Wednesday 5 February 2014

Digital Twin Spark Ignition

Definition
It is very interesting to know about complete combustion in automobile engineering, because in actual practice perfect combustion is not possible, owing to various losses in the combustion chamber and to the design of the internal combustion engine. Moreover, the burning of the fuel is not instantaneous. An alternative is to make the combustion of the fuel as fast as possible. This can be done by using two spark plugs that spark alternately at a small time interval, increasing the diameter of the flame front and burning the fuel almost instantaneously. This system is called DTSI (Digital Twin Spark Ignition). With twin sparks, combustion is more nearly complete. This paper presents the working of the digital twin spark ignition system: how the twin sparks are produced at 20,000 volts, their timing, efficiency, advantages and disadvantages, the diameter of the flame, how complete combustion becomes possible, and how the Twin Spark System reduces smoke and emissions from the exhaust pipe of the bike.
A DIGITAL TWIN SPARK ignition engine has two spark plugs located at opposite ends of the combustion chamber, so fast and efficient combustion is obtained. The benefits of this efficient combustion process are felt as better fuel efficiency and lower emissions. The ignition system on the Twin Spark is a digital system with static spark advance and no moving parts subject to wear. It is mapped by the integrated digital electronic control box, which also handles fuel injection and valve timing, and it features two plugs per cylinder.
This innovative solution, which also entails a special configuration of the hemispherical combustion chambers and piston heads, ensures a fast, wide flame front when the air-fuel mixture is ignited. Less ignition advance is therefore needed, and relatively lean mixtures can be used. This technology combines the light weight offered by two-stroke engines with a significant power boost, i.e. a considerable "power-to-weight ratio" compared to quite a few four-stroke engines. Fig. 1 shows the actual picture of a Bajaj Pulsar bike.
Moreover, such a system can adjust the idling speed, cuts off the fuel feed when the accelerator is released, and meters the enrichment of the air-fuel mixture for cold starting and acceleration; if necessary, it also prevents the upper rev limit from being exceeded. At low revs the overboost is mostly used when overtaking, and this is why it cuts out automatically. At higher revving speeds the overboost enhances full power delivery and stays on as long as the driver keeps maximum pressure on the accelerator.
Main Characteristics
 Digital electronic ignition with two plugs per cylinder and two ignition distributors;
 twin overhead cams with camshaft timing variation;
 injection fuel feed with integrated electronic twin spark ignition;
 high specific power;
 compact design;
 superior balance.
This power unit, equipping the naturally aspirated 2-litre used on the Alfa 164, is a direct derivative of the engine fitted on the 2.0 Twin Spark version of the Alfa 75, a recent addition to the Alfa car range. It includes a number of exclusive engineering solutions resulting in superior power output and exceptional peak torque for this cylinder capacity. Its main characteristics are:
 Digital electronic ignition with two plugs per cylinder and two ignition distributors;
 twin overhead cams with camshaft timing variation;
 injection fuel feed with integrated electronic twin spark ignition.

Biodiesel

Definition
Bio-diesel is a vegetable oil processed to resemble diesel fuel. The first use of peanut oil was made in 1895 by Dr. Rudolf Diesel himself (1858-1913), who predicted: "The use of vegetable oils as engine fuels may seem insignificant today. But such oils may become in course of time as important as petroleum and the coal tar products of the present time." Bio-diesel is the ethyl or methyl ester of a fatty acid, made from virgin or used vegetable oils (both edible and non-edible) and animal fats through trans-esterification. Just like diesel, bio-diesel operates in compression ignition engines, which require very little or no engine modification up to 20% blends, and only minor modifications for higher-percentage blends, because bio-diesel is similar to diesel but far more eco-friendly.
The recent depletion of fossil fuels, and the price fluctuations caused by uncertain supplies, compel us to search for renewable, safe and non-polluting sources of energy. India is not self-sufficient in petroleum and has to import about two thirds of its requirement. Presently the Indian Government spends about Rs. 90,000 crore on petroleum fuel, and annual consumption is around 40 million tonnes. One solution to the current oil crisis, and a way to ward off any future energy and economic crunch, is to explore the feasibility of substituting diesel with an alternative fuel that can be produced in our country on a massive scale for commercial utilization.
The Indian Government, research institutions and automobile industries are taking an interest in bio-diesel from various non-edible oil-bearing trees such as Jatropha, Karanji, Mahua and Neem. As India is short of edible oils even for human consumption, and since the cost of edible oil is very high, it is preferable to use non-edible oils. Jatropha curcas is one of the prospective bio-diesel-yielding crops. This paper highlights our work on alternate fuels and the importance of choosing jatropha, which reduces pollution drastically in terms of sulphates and carbon monoxide. To start with, we reduced the viscosity problem to a large extent by carrying out the transesterification process in our chemistry laboratory. We also studied the cost factor involved in the usage of jatropha. A performance test was conducted on an electrically loaded diesel engine, and a study of the emissions was made using an exhaust gas analyser in our thermal laboratory. The pollution levels came down drastically, and performance was better with various blends of jatropha and diesel.

Process Explanation

If methanol is used in the transesterification reaction, the process is termed methanolysis and fatty acid methyl esters are generated, which are called biodiesel. Three consecutive and reversible reactions are believed to occur in the transesterification, given below:
Triglyceride + ROH  ⇌ (catalyst)  Diglyceride + R'COOR
Diglyceride + ROH  ⇌ (catalyst)  Monoglyceride + R''COOR
Monoglyceride + ROH  ⇌ (catalyst)  Glycerol + R'''COOR
The first step is the conversion of triglycerides to diglycerides, followed by the conversion of diglycerides to monoglycerides, and finally of monoglycerides to glycerol, yielding one methyl ester molecule from each glyceride at each step. A catalyst and excess alcohol are used to increase the rate of reaction and to shift the equilibrium to the product side, respectively.
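The stoichiometry above implies three moles of alcohol per mole of triglyceride, with excess alcohol driving the equilibrium forward. A minimal sketch of the methanol requirement, assuming a typical 6:1 methanol-to-oil molar ratio and an average oil molar mass of about 870 g/mol (common literature values, not figures from this paper):

```python
# Stoichiometry: 1 mol triglyceride + 3 mol methanol -> 3 mol methyl ester + 1 mol glycerol.
OIL_MOLAR_MASS = 870.0    # g/mol, assumed average triglyceride (e.g. jatropha oil)
MEOH_MOLAR_MASS = 32.04   # g/mol

def methanol_needed(oil_mass_g, molar_ratio=6.0):
    """Methanol mass (g) for a given oil mass at a chosen methanol:oil molar ratio."""
    oil_mol = oil_mass_g / OIL_MOLAR_MASS
    return oil_mol * molar_ratio * MEOH_MOLAR_MASS

# 1 kg of oil at the common 6:1 excess ratio:
print(round(methanol_needed(1000.0), 1))  # ~221.0 g of methanol
```

At the bare stoichiometric 3:1 ratio the same kilogram of oil would need only about 110 g of methanol; the excess is there purely to push the reversible reactions toward the ester side.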

Performance Test On IC Engine

The engine used for the present investigation is a single-cylinder "Comet" vertical diesel engine (1500 rpm, 3.5 kW, water cooled), coupled to an eddy current dynamometer. In the present work the experiments were carried out at constant speed and for varying load conditions, i.e. no load, 25%, 50%, 75% and 100% of the rated load. The injection parameters were kept constant for the existing engine for the entire test programme. The static fuel injection timing and the fuel injection pressure for the given engine are 27° before TDC and 220 bar respectively, as specified by the manufacturer. The engine was started and warmed up on diesel fuel; the diesel fuel line was then cut off and, simultaneously, the line carrying the fuel under investigation was opened. No additives were added to the system before conducting the test. Esterified vegetable oil was injected directly into the combustion chamber through the conventional fuel injection system. The test was done separately for the four fuels taken for the investigation, and in each case the observations were recorded after steady state was reached.
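The dynamometer readings from such a load test reduce to brake power and brake specific fuel consumption in the usual way. A small sketch, where the torque and fuel-flow figures are illustrative assumptions, not measured data from this test:

```python
import math

def brake_power_kw(torque_nm, rpm):
    """Brake power from dynamometer torque and engine speed: P = 2*pi*N*T/60."""
    return 2 * math.pi * rpm * torque_nm / 60 / 1000

def bsfc_g_per_kwh(fuel_g_per_h, power_kw):
    """Brake specific fuel consumption in g/kWh."""
    return fuel_g_per_h / power_kw

# e.g. roughly 75% load: ~16.7 N*m at the rated 1500 rpm, ~750 g/h fuel flow
p = brake_power_kw(16.7, 1500)   # ~2.62 kW
print(round(p, 2), round(bsfc_g_per_kwh(750.0, p), 1))
```

Comparing BSFC curves like this at each load point for diesel and for each jatropha blend is how the relative performance of the four fuels would be judged.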

Thermo Acoustic Refrigeration

Definition
Thermoacoustic phenomena have been known for many years, but their use to develop engines and pumps is fairly recent. Thermoacoustic refrigeration uses high-intensity sound waves in a pressurized gas tube to pump heat from one place to another and so produce a refrigeration effect. In this type of refrigeration all conventional refrigerants are eliminated and sound waves take their place; all we need is a loudspeaker and an acoustically insulated tube. The system also completely eliminates the need for lubricants and results in about 40% less energy consumption. Thermoacoustic heat engines have the advantage of operating with inert gases and with little or no moving parts, making them highly efficient, ideal candidates for environmentally safe refrigeration with almost zero maintenance cost. We will now look into a thermoacoustic refrigerator, its principle and functions.

Basic Functioning

In a nutshell, a thermoacoustic engine converts heat from a high-temperature source into acoustic power while rejecting waste heat to a low-temperature sink. A thermoacoustic refrigerator does the opposite, using acoustic power to pump heat from a cool source to a hot sink. These devices perform best when they employ noble gases as their thermodynamic working fluids. Unlike the chemicals used in refrigeration over the years, such gases are both nontoxic and environmentally benign. Another appealing feature of thermoacoustics is that one can easily flange an engine onto a refrigerator, creating a heat-powered cooler with no moving parts at all.
The principle can be pictured as a loudspeaker creating high-amplitude sound waves that compress the refrigerant, allowing heat absorption. Researchers have exploited the fact that sound waves travel by compressing and expanding the gas in which they are generated.
Suppose such a wave is traveling through a tube. A temperature gradient can then be generated by putting a stack of plates at the right place in the tube in which the sound waves are bouncing around: some plates in the stack will get hotter while the others get colder. All it takes to make a refrigerator out of this is to attach heat exchangers to the ends of the stack.
It is interesting to note that humans feel pain when they hear sound above 120 decibels, while in this system the sound may reach amplitudes of 173 decibels. But even if the fridge were to crack open, the sound would not escape to the outside environment, since this intense noise can be generated only inside the pressurized gas locked in the cooling system. It is worth noting that prototypes of the technology have been built, and one has even flown aboard a space shuttle.
Thermoacoustic refrigerators now under development use sound waves "strong enough to make your hair catch fire," says inventor Steven L. Garrett, but this noise is safely contained in a pressurized tube; if the tube were shattered, the noise would instantly dissipate to harmless levels. Because it pumps heat, such intense acoustic power is a clean, dependable replacement for cooling systems that use ozone-destroying chlorofluorocarbons (CFCs). The scientist Hofler is also developing super-cold cryocoolers capable of temperatures as low as -135°F (180 K). He hopes to achieve -243°F (120 K), because such cryogenic temperatures would keep electronic components cool in space or speed the function of new microprocessors.
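The decibel figures quoted above can be turned into actual pressure amplitudes with the standard sound-pressure-level relation p = p_ref * 10^(L/20); a quick sketch:

```python
P_REF = 20e-6  # Pa, standard reference pressure for sound pressure level (SPL)

def spl_to_pressure(db):
    """Acoustic pressure amplitude (Pa) corresponding to a given SPL in dB."""
    return P_REF * 10 ** (db / 20)

print(round(spl_to_pressure(120)))  # human pain threshold: ~20 Pa
print(round(spl_to_pressure(173)))  # inside the resonator: ~8934 Pa, roughly 9% of 1 atm
```

The comparison makes the containment argument concrete: 173 dB is an enormous pressure swing for open air, but only a modest fraction of the static pressure inside a pressurized resonator.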

Solar Cars

Definition
The first solar car invented was a tiny 15-inch vehicle created by William G. Cobb of General Motors. Called the Sunmobile, it was showcased at the Chicago Powerama convention on August 31, 1955. It was made up of 12 selenium photovoltaic cells and a small Pooley electric motor turning a pulley, which in turn rotated the rear wheel shaft. The first solar car in history was obviously too small to drive. Now let's jump to 1962, when the first solar car that a person could drive was demonstrated to the public. The International Rectifier Company converted a vintage 1912 Baker electric car to run on photovoltaic energy in 1958, but didn't show it until four years later. Around 10,640 individual solar cells were mounted on the Baker's rooftop to help propel it.
In 1977, University of Alabama professor Ed Passereni built the Bluebird solar car, a prototype full-scale vehicle. The Bluebird was supposed to move on power created by the photovoltaic cells only, without the use of a battery. It was exhibited at the 1982 World's Fair in Knoxville, TN. Between 1977 and 1980 (the exact dates are not known for sure), at Tokyo Denki University, professor Masaharu Fujita first created a solar bicycle, then a 4-wheel solar car, which was actually two solar bicycles put together. In 1979 Englishman Alain Freeman invented a solar car, which he road-registered in 1980. The Freeman solar car was a 3-wheeler with a solar panel on the roof.
Energy Flow For A Solar Car
The energy from the sun strikes the earth throughout the entire day; however, the amount of energy changes with the time of day, weather conditions, and geographic location. The amount of available solar energy is known as the solar insolation, and it is most commonly measured in watts per square metre (W/m²). In India, on a bright sunny day in the early afternoon, the insolation will be roughly 1000 W/m², but in the mornings, in the evenings, or when the skies are overcast, it will fall towards 0 W/m². One must understand how the available insolation changes in order to capture as much of the available energy as possible.
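The usable electrical power follows directly from the insolation: P = G × A × η. A small sketch, where the array area and cell efficiency are assumed figures for a typical solar car, not values from this text:

```python
def array_power_w(insolation_w_m2, area_m2, efficiency):
    """Electrical output of a solar array: insolation * area * cell efficiency."""
    return insolation_w_m2 * area_m2 * efficiency

# Assumed: 6 m^2 of array at 20% cell efficiency.
print(array_power_w(1000, 6.0, 0.20))  # bright afternoon: 1200.0 W
print(array_power_w(200, 6.0, 0.20))   # overcast sky:      240.0 W
```

The five-fold swing between the two conditions is exactly why the battery pack and the energy-routing logic described next matter so much.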
The sunlight hits the cells of the solar array, which produce an electrical current. This energy can travel to the batteries for storage, go directly to the motor controller, or do a combination of both. The energy sent to the controller powers the motor that turns the wheels and makes the car move.
Generally, if the car is in motion, the converted sunlight is delivered directly to the motor controller, but at times more energy comes from the array than the motor controller needs. When this happens, the extra energy is stored in the batteries for later use.
When the solar array cannot produce enough energy to drive the motor at the desired speed, the array's output is supplemented with stored energy from the batteries.
Of course, when the car is not in motion, all the energy from the solar array is stored in the batteries. There is also a way to recover some of the energy used to propel the car. When the car is being slowed down, instead of using the normal mechanical brakes, the motor is turned into a generator and energy flows backwards through the motor controller and into the batteries for storage. This is known as regenerative braking. The amount of energy returned to the batteries is small, but every bit helps.
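The routing rules in the last three paragraphs can be sketched as one decision function. This is a simplified model: it ignores regenerative braking and conversion losses, and the wattages in the examples are illustrative:

```python
def route_energy(array_w, motor_demand_w, moving):
    """Return (to_motor_w, to_battery_w, from_battery_w) for one instant."""
    if not moving:
        return (0.0, array_w, 0.0)  # parked: all array output goes to storage
    if array_w >= motor_demand_w:
        # array covers the motor; any surplus charges the battery
        return (motor_demand_w, array_w - motor_demand_w, 0.0)
    # array falls short; the battery supplies the difference
    return (array_w, 0.0, motor_demand_w - array_w)

print(route_energy(1200, 900, True))   # bright sun: surplus charges the battery
print(route_energy(240, 900, True))    # overcast: battery tops up the shortfall
print(route_energy(1000, 0, False))    # parked: everything stored
```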
Application
•  This concept can be used to build single-seater four-wheel vehicles in practice.
•  It can be extended to a more commercial form of four-wheeled vehicle.
•  It suits industries where small vehicles perform lightweight conveying work from one place to another.
•  It can be used in places where fuel-based vehicles are banned because of the pollution and noise they produce.

Six Stroke Engine

Definition
Six-stroke engine: the name itself indicates a cycle of six strokes, of which two are useful power strokes. In its mechanical design, the six-stroke engine with external and internal combustion and double flow is similar to the familiar reciprocating internal combustion engine. However, it differentiates itself entirely through its thermodynamic cycle and a modified cylinder head with two supplementary chambers, a combustion chamber and an air-heating chamber, both independent of the cylinder. Here the cylinder and the combustion chamber are separated, which gives more freedom for design analysis. Several advantages result from this, one very important one being the increase in thermal efficiency.
It consists of two cycles of operation, an external combustion cycle and an internal combustion cycle, each having four events. In addition to the two valves of the four-stroke engine, two more valves are incorporated, operated by a piston arrangement.
The six-stroke engine is thermodynamically more efficient because the change in volume of the power stroke is greater than that of the intake stroke and the compression stroke. Its main advantages include a reduction in fuel consumption by 40%, two power strokes per six-stroke cycle, a dramatic reduction in pollution, and adaptability to multi-fuel operation. Its adoption by the automobile industry would have a tremendous impact on the environment and the world economy.
Analysis Of Six Stroke Engine
The six-stroke engine arises mainly from a radical hybridization of two- and four-stroke technology. It is supplemented with two chambers, which allow parallel functioning and result in a full eight-event cycle: two cycles of four events each, an external combustion cycle and an internal combustion cycle. In the internal combustion cycle there is direct contact between air and the working fluid, whereas in the external combustion process there is no direct contact between them. Events that affect the motion of the crankshaft are called dynamic events; those which do not are called static events.
Six-Stroke Engine Cycle Diagram

Multi Air Engine

Definition
The operating principle of the system, applied to intake valves, is the following: a piston, moved by a mechanical intake camshaft, is connected to the intake valve through a hydraulic chamber, which is controlled by a normally open on/off solenoid valve. When the solenoid valve is closed, the oil in the hydraulic chamber behaves like a solid body and transmits to the intake valves the lift schedule imposed by the mechanical intake camshaft. When the solenoid valve is open, the hydraulic chamber and the intake valves are de-coupled; the intake valves do not follow the intake camshaft anymore and close under the valve spring action.
The final part of the valve closing stroke is controlled by a dedicated hydraulic brake, to ensure a soft and regular landing phase in all engine operating conditions. Through control of the solenoid valve opening and closing times, a wide range of optimum intake valve opening schedules can easily be obtained. For maximum power, the solenoid valve is always closed and full valve opening is achieved by following the mechanical camshaft completely; the camshaft is specifically designed to maximise power at high engine speed (long opening time).
For low-rpm torque, the solenoid valve is opened near the end of the camshaft profile, leading to early intake valve closing. This eliminates unwanted backflow into the manifold and maximises the air mass trapped in the cylinders. In engine part-load, the solenoid valve is opened earlier, causing partial valve openings to control the trapped air mass as a function of the required torque. Alternatively the intake valves can be partially opened by closing the solenoid valve once the mechanical camshaft action has already started. In this case the air stream into the cylinder is faster and results in higher in-cylinder turbulence. The last two actuation modes can be combined in the same intake stroke, generating a so-called Multilift mode that enhances turbulence and combustion rate at very low loads.
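The actuation modes described above can be sketched as a simple selection rule. The mode names follow the text, but the rpm and load thresholds below are illustrative assumptions, not calibration values from any real MultiAir engine:

```python
def multiair_mode(rpm, load_fraction):
    """Pick an intake-valve strategy for one operating point (load_fraction in 0..1)."""
    if load_fraction < 0.15:
        return "multilift"             # repeated partial lifts: turbulence at very low load
    if rpm > 5000 and load_fraction > 0.9:
        return "full lift"             # solenoid always closed: follow the cam for max power
    if rpm < 2500 and load_fraction > 0.7:
        return "early intake closing"  # open solenoid near end of cam profile: no backflow
    if load_fraction < 0.5:
        return "partial lift"          # open solenoid earlier to meter the trapped air mass
    return "late opening"              # close solenoid after cam action starts: more turbulence

print(multiair_mode(6000, 1.0))   # full lift
print(multiair_mode(2000, 0.8))   # early intake closing
```

The point of the sketch is that one solenoid valve per cylinder, timed differently, reproduces what would otherwise require several distinct cam profiles.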
Just as MultiJet brought multiple injections to small diesel engines, with the recent Modular Injection technology soon to be introduced, MultiAir technology will pave the way to further technological evolutions for petrol engines:
 Integration of the MultiAir direct air-mass control with direct petrol injection, to further improve transient response and fuel economy.
 Introduction of more advanced multiple valve-opening strategies to further reduce emissions.
 Innovative engine-turbocharger matching to control trapped air mass through a combination of optimum boost pressure and valve-opening strategies.
While electronic petrol injection developed in the '70s and Common Rail developed in the '90s were fuel-specific breakthrough technologies, MultiAir Electronic Valve Control technology can be applied to all internal combustion engines whatever fuel they burn.
MultiAir, initially developed for spark ignition engines burning light fuels ranging from petrol to natural gas and hydrogen, also has wide potential for diesel engine emissions reduction.

Pistonless Pump

Definition
Rocket engines require a tremendous amount of fuel at high pressure; often the pump costs more than the thrust chamber. One way to supply fuel is to use an expensive turbopump; another is to pressurize the fuel tank. Pressurizing a large fuel tank, however, requires a heavy, expensive tank. Suppose that instead of pressurizing the entire tank, the main tank is drained into a small pump chamber which is then pressurized. To achieve steady flow, the pump system consists of two pump chambers, each supplying fuel for half of each cycle. The pump is powered by pressurized gas acting directly on the fluid. For each half of the pump system, a chamber is filled from the main tank under low pressure and at a high flow rate; the chamber is then pressurized, and the fluid is delivered to the engine at a moderate flow rate under high pressure. The chamber is then vented and the cycle repeats.
The system is designed so that the inlet flow rate is higher than the outlet flow rate. This allows time for one chamber to be vented, refilled and pressurized while the other is being emptied. A breadboard pump has been tested and works well, and a high-pressure version has been designed and built and is pumping at 20 gpm and 550 psi.
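The quoted operating point can be checked with the ideal fluid-power relation P = Q × ΔP; a quick sketch using standard unit conversions:

```python
GPM_TO_M3S = 3.785411784e-3 / 60   # US gallons per minute -> cubic metres per second
PSI_TO_PA = 6894.757               # pounds per square inch -> pascals

def hydraulic_power_w(flow_gpm, pressure_psi):
    """Ideal fluid power delivered by the pump: P = Q * delta-P (no losses)."""
    return flow_gpm * GPM_TO_M3S * pressure_psi * PSI_TO_PA

p = hydraulic_power_w(20, 550)
print(round(p), round(p / 745.7, 1))  # ~4785 W, ~6.4 hp of ideal hydraulic power
```

This is the useful power the pressurant gas must supply at the stated flow and pressure; a real pump would need somewhat more to cover venting and fill losses.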
Nearly all of the hardware in this pump consists of pressure vessels, so the weight is low. There are fewer than 10 moving parts and no lubrication issues that might cause problems with other pumps. The design and construction of this pump are straightforward, and no precision parts are required. The device has an advantage over standard turbopumps in that the weight is about the same while the unit, engineering and test costs are lower, and the chance of catastrophic failure is less. It has an advantage over pressure-fed designs in that the weight of the complete rocket is much less, and the rocket is much safer because the tanks of rocket fuel do not need to be at high pressure. The pump can be started with high reliability after being stored for an extended period. It can be used to replace turbopumps for rocket booster operation, or to replace high-pressure tanks for deep-space propulsion; it can also be used for satellite orbit changes and station keeping.
Performance Validation:
A calculation of the weight of this type of pump shows that the power-to-weight ratio would be dominated by the pressure chamber and would be of the order of 8-12 hp per lb for a 5-second cycle using a composite chamber. This performance is similar to state-of-the-art gas-generator turbopump technology (the F-1 turbopump on the Saturn V put out 20 hp/lb). This pump could be run until dry, so it would achieve better residual-propellant scavenging than a turbopump. The system would require a supply of gaseous or liquid helium, heated by a heat exchanger mounted on the combustion chamber before being used to pressurize the fuel, as in the Ariane rocket. The volume of gas required would be equivalent to a standard pressure-fed design, with a small additional amount to account for ullage in the pump chambers. The rocket engine itself could be a primarily ablative design, as in the NASA Fastrac, the Scorpius rocket, or other recent rocket engine tests.

Micro Air Vehicles

Definition
Micro air vehicles are either fixed-wing aircraft, rotary-wing aircraft (helicopters), or flapping-wing designs (of which the ornithopter is a subset), with each being used for different purposes. Fixed-wing craft require higher, forward flight speeds to stay airborne and are therefore able to cover longer distances; however, they are unable to manoeuvre effectively inside structures such as buildings. Rotary-wing designs allow the craft to hover and move in any direction, at the cost of requiring closer proximity for launch and recovery. Flapping-wing-powered flight has yet to reach the same level of maturity as fixed-wing and rotary-wing designs. However, flapping-wing designs, if fully realized, would boast manoeuvrability superior to both fixed- and rotary-wing designs, thanks to the extremely high wing loadings achieved via unsteady aerodynamics.
Usages
The Black Widow is the current state-of-the-art MAV and is an important benchmark. It is the product of 4 years of research by Aerovironment and DARPA. The Black Widow has a 6-inch wingspan and weighs roughly 56 grams. The plane has a flight range of 1.8 kilometres, a flight endurance time of 30 minutes, and a max altitude of 769 feet. The plane carries a surveillance camera. In addition it utilizes computer controlled systems to ease control.
The Black Widow is made out of foam; individual pieces were cut using a hot-wire mechanism with a CNC machine, allowing for greater accuracy.
The University of Florida has been very successful over the past five years in the MAV competitions. In 2001 they won in both the heavy lift and surveillance categories. Their plane was constructed of a resilient plastic attached to a carbon fibre web structure. This resulted in a crash resistant airfoil.
Working Principle
Newton's first law states that a body at rest will remain at rest, and a body in motion will continue in straight-line motion, unless subjected to an external applied force. That means that if one sees a bend in the flow of air, or if air originally at rest is accelerated into motion, there is a force acting on it. Newton's third law states that for every action there is an equal and opposite reaction. As an example, an object sitting on a table exerts a force on the table (its weight), and the table puts an equal and opposite force on the object to hold it up. In order to generate lift, a wing must do something to the air: what the wing does to the air is the action, while the lift is the reaction.
Let's compare two figures commonly used to show streams of air (streamlines) over a wing. In the first, the air comes straight at the wing, bends around it, and then leaves straight behind it. We have all seen similar pictures, even in flight manuals, but there the air leaves the wing exactly as it arrived ahead of it; there is no net action on the air, so there can be no lift. Figure 3.7 shows the streamlines as they should be drawn: the air passes over the wing and is bent down. The bending of the air is the action; the reaction is the lift on the wing.
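The action-reaction account of lift can be put in numbers: lift equals the rate at which downward momentum is given to the air. A sketch with illustrative figures (the air mass flow and downwash speed below are assumed, not taken from any measurement):

```python
def lift_n(air_mass_flow_kg_s, downwash_m_s):
    """Lift = rate of change of the air's downward momentum (Newton's 2nd and 3rd laws)."""
    return air_mass_flow_kg_s * downwash_m_s

# A wing turning 500 kg of air per second downward at 2 m/s:
print(lift_n(500.0, 2.0))  # 1000.0 N of lift, enough to support ~102 kg
```

The same relation explains why MAVs are hard to fly slowly: at low speed less air is handled per second, so it must be deflected harder to produce the same lift.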

i-VTEC

Definition
The most important challenge facing car manufacturers today is to offer vehicles that deliver excellent fuel efficiency and superb performance while maintaining cleaner emissions and driving comfort. This paper deals with i-VTEC (intelligent Variable valve Timing and lift Electronic Control) engine technology, one of the most advanced technologies in the IC engine. i-VTEC is the new trend in Honda's latest large-capacity four-cylinder petrol engine family. The name is derived from the 'intelligent' combustion control technologies that match outstanding fuel economy, cleaner emissions and reduced weight with high output and greatly improved torque characteristics in all speed ranges. The design cleverly combines the highly renowned VTEC system, which varies the timing and amount of lift of the valves, with Variable Timing Control.
VTC is able to advance and retard inlet valve opening by altering the phasing of the inlet camshaft to best match the engine load at any given moment. The two systems work in concert under the close control of the engine management system, delivering improved cylinder charging and combustion efficiency, reduced intake resistance, and improved exhaust gas recirculation, among other benefits. i-VTEC technology offers tremendous flexibility, since it is able to fully maximize engine potential over the complete range of operation. In short, Honda's i-VTEC technology gives us the best in vehicle performance.
The latest and most sophisticated VTEC development is i-VTEC ("intelligent" VTEC), which combines features of all the various previous VTEC systems for even greater power band width and cleaner emissions. With the latest i-VTEC setup, at low rpm the timing of the intake valves is now staggered and their lift is asymmetric, which creates a swirl effect within the combustion chambers. At high rpm, the VTEC transitions as previously into a high-lift, long-duration cam profile.
 The i-VTEC system utilizes Honda's proprietary VTEC system and adds VTC (Variable Timing Control), which allows for dynamic/continuous intake valve timing and overlap control. The demanding aspects of fuel economy, ample torque, and clean emissions can all be controlled and provided at a higher level with VTEC (intake valve timing and lift control) and VTC (valve overlap control) combined.
The i stands for "intelligent": i-VTEC is intelligent VTEC. Honda introduced many new innovations in i-VTEC, but the most significant is the addition of a variable valve-opening-overlap mechanism to the VTEC system. Named VTC, for Variable Timing Control, the current (initial) implementation is on the intake camshaft and allows the valve-opening overlap between the intake and exhaust valves to be continuously varied during engine operation. This allows a further refinement of the power delivery characteristics of VTEC, permitting fine-tuning of the mid-band power delivery of the engine.
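The two mechanisms, discrete cam-profile switching (VTEC) plus continuous intake-cam phasing (VTC), can be sketched as one function of the operating point. The switch-over speed and the advance figures below are illustrative assumptions, not Honda calibration data:

```python
def ivtec_state(rpm, load_fraction):
    """Return (cam_profile, intake_advance_deg) for one operating point."""
    if rpm > 4500:
        cam = "high-lift long-duration"  # VTEC switched to the power cam profile
        advance = 10.0                   # modest fixed phasing at high speed
    else:
        cam = "staggered low-lift"       # asymmetric lift creates intake swirl
        # VTC phases the intake cam continuously: more overlap (internal EGR)
        # as load rises, little overlap near idle.
        advance = 25.0 * load_fraction
    return cam, advance

print(ivtec_state(6000, 1.0))  # power cam engaged
print(ivtec_state(2000, 0.4))  # swirl cam, mild advance
```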
Specifications Of The 1.8L i-VTEC Engine
Ø  Engine type and number of cylinders: water-cooled in-line 4-cylinder
Ø  Displacement: 1,799 cc
Ø  Max power / rpm: 103 kW (138 hp) / 6,300
Ø  Torque / rpm: 174 Nm (128 lb-ft) / 4,300
Ø  Compression ratio: 10.5:1
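The 10.5:1 compression ratio in the table sets an ideal upper bound on efficiency via the air-standard Otto relation η = 1 − r^(1−γ); a quick sketch:

```python
def otto_efficiency(compression_ratio, gamma=1.4):
    """Ideal air-standard Otto cycle efficiency: eta = 1 - r**(1 - gamma)."""
    return 1 - compression_ratio ** (1 - gamma)

# Ideal bound for the 10.5:1 ratio above (real engines fall well below this):
print(round(otto_efficiency(10.5) * 100, 1))  # ~61.0 %
```

The gap between this ideal figure and real brake thermal efficiency is exactly what technologies like i-VTEC chip away at, by improving charging and combustion across the speed range.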

IC Engine

Definition
The Internal Combustion Engine is an engine in which the combustion of a fuel (generally, fossil fuel) occurs with an oxidizer (usually air) in a combustion chamber. In an internal combustion engine the expansion of the high temperature and pressure gases, which are produced by the combustion, directly applies force to a movable component of the engine, such as the pistons or turbine blades and by moving it over a distance, generate useful mechanical energy. The term internal combustion engine usually refers to an engine in which combustion is intermittent, such as the more familiar four-stroke and two-stroke piston engines, along with variants, such as the Wankel rotary engine. A second class of internal combustion engines use continuous combustion: gas turbines, jet engines and most rocket engines.
Invention of the two-stroke cycle is attributed to Scottish engineer Dugald Clerk who in 1881 patented his design, his engine having a separate charging cylinder. The crankcase-scavenged engine, employing the area below the piston as a charging pump, is generally credited to Englishman Joseph Day (and Frederick Cock for the piston-controlled inlet port).

A two-stroke engine is an internal combustion engine that completes the thermodynamic cycle in two movements of the piston compared to twice that number for a four-stroke engine. This increased efficiency is accomplished by using the beginning of the compression stroke and the end of the combustion stroke to perform simultaneously the intake and exhaust (or scavenging) functions. In this way two-stroke engines often provide strikingly high specific power. Gasoline (spark ignition) versions are particularly useful in lightweight (portable) applications such as chainsaws and the concept is also used in diesel compression ignition engines in large and non-weight sensitive applications such as ships and locomotives.

Today, internal combustion engines in cars, trucks, motorcycles, aircraft, construction machinery and many others most commonly use a four-stroke cycle. The four strokes refer to the intake, compression, combustion (power), and exhaust strokes that occur during two crankshaft rotations per working cycle of the gasoline engine and diesel engine. A less technical description of the four-stroke cycle is "Suction, Compression, Ignition, Exhaust". The cycle begins at top dead center (TDC), when the piston is farthest away from the axis of the crankshaft. A stroke refers to the full travel of the piston from Top Dead Center (TDC) to Bottom Dead Center (BDC).
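The correspondence between crank angle and stroke described above can be sketched in a few lines of Python. This is a deliberately simplified illustration: it assumes angle 0 is TDC at the start of the intake stroke and ignores real-world valve overlap and ignition advance.

```python
# Simplified four-stroke model: one working cycle spans two crank
# revolutions (720 degrees); each stroke covers 180 degrees.
STROKES = ["intake", "compression", "power", "exhaust"]

def stroke_at(crank_angle_deg: float) -> str:
    """Return the stroke underway at a given crank angle.

    Angle 0 is taken as TDC at the start of the intake stroke
    (a simplifying assumption; real valve events overlap)."""
    phase = crank_angle_deg % 720.0
    return STROKES[int(phase // 180.0)]

for angle in (0, 200, 400, 650):
    print(angle, "->", stroke_at(angle))
```

Running this walks through all four strokes across the two crankshaft rotations that make up one working cycle.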
Common Rail Direct Fuel Injection (CRDi) Engines:
Common rail direct fuel injection is a modern variant of direct fuel injection system for petrol and diesel engines. On diesel engines, it features a high-pressure (over 1,000 bar/15,000 psi) fuel rail feeding individual solenoid valves, as opposed to low-pressure fuel pump feeding unit injectors. Third-generation common rail diesels now feature piezoelectric injectors for increased precision, with fuel pressures up to 1,800 bar/26,000 psi.
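The pressure figures quoted above can be cross-checked with the standard bar-to-psi conversion (1 bar = 14.5038 psi); the rounded values in the text come out as expected:

```python
# Convert the quoted common-rail pressures from bar to psi.
PSI_PER_BAR = 14.5038  # standard conversion factor

for bar in (1000, 1800):
    print(f"{bar} bar ~ {bar * PSI_PER_BAR:,.0f} psi")
```

This reproduces roughly 14,500 psi and 26,100 psi, matching the "15,000 psi" and "26,000 psi" round figures in the text.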

The common rail system prototype was developed in the late 1960s by Robert Huber of Switzerland and the technology further developed by Dr. Marco Ganser at the Swiss Federal Institute of Technology in Zurich, later of Ganser-Hydromag AG (est.1995) in Oberägeri.

The first successful use in a production vehicle began in Japan in the mid-1990s. Dr. Shohei Itoh and Masahiko Miyaki of the Denso Corporation, a Japanese automotive parts manufacturer, developed the common rail fuel system for heavy-duty vehicles and turned it into practical use on their ECD-U2 common-rail system, mounted on the Hino Rising Ranger truck and sold for general use in 1995. Denso claims the first commercial high-pressure common rail system in 1995.

Modern common rail systems, whilst working on the same principle, are governed by an engine control unit (ECU) which opens each injector electronically rather than mechanically. This was extensively prototyped in the 1990s with collaboration between Magneti Marelli, Centro Ricerche Fiat and Elasis. After research and development by the Fiat Group, the design was acquired by the German company Robert Bosch GmbH for completion of development and refinement for mass production. In hindsight the sale appeared to be a tactical error for Fiat, as the new technology proved to be highly profitable. The company had little choice but to sell, however, as it was in a poor financial state at the time and lacked the resources to complete development on its own.[3] In 1997 its use was extended to passenger cars. The first passenger car that used the common rail system was the 1997 model Alfa Romeo 156 1.9 JTD,[4] followed later that same year by the Mercedes-Benz C 220 CDI.

Common rail engines have been used in marine and locomotive applications for some time. The Cooper-Bessemer GN-8 (circa 1942) is an example of a hydraulically operated common rail diesel engine, also known as a modified common rail.

Principle
Solenoid or piezoelectric valves make possible fine electronic control over the fuel injection time and quantity, and the higher pressure that the common rail technology makes available provides better fuel atomisation. In order to lower engine noise the engine's electronic control unit can inject a small amount of diesel just before the main injection event ("pilot" injection), thus reducing its explosiveness and vibration, as well as optimising injection timing and quantity for variations in fuel quality, cold starting, and so on. Some advanced common rail fuel systems perform as many as five injections per stroke.

Common rail engines require no heating-up time and produce lower engine noise and emissions than older systems.

Diesel engines have historically used various forms of fuel injection. Two common types include the unit injection system and the distributor/inline pump systems (See diesel engine and unit injector for more information). While these older systems provided accurate fuel quantity and injection timing control they were limited by several factors:
* They were cam driven and injection pressure was proportional to engine speed. This typically meant that the highest injection pressure could only be achieved at the highest engine speed and the maximum achievable injection pressure decreased as engine speed decreased. This relationship is true with all pumps, even those used on common rail systems; with the unit or distributor systems, however, the injection pressure is tied to the instantaneous pressure of a single pumping event with no accumulator and thus the relationship is more prominent and troublesome.
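The limitation described in that bullet can be illustrated with a toy model: in a cam-driven unit injector the peak injection pressure scales with engine speed, while a common-rail accumulator holds a roughly constant rail pressure across the speed range. All numbers below are illustrative assumptions, not data for any particular engine.

```python
# Toy comparison of cam-driven vs. common-rail injection pressure.
RAIL_PRESSURE_BAR = 1600       # assumed common-rail set point
UNIT_PEAK_AT_RATED_BAR = 1800  # assumed unit-injector peak at rated speed
RATED_SPEED_RPM = 4000

def unit_injector_pressure(rpm: float) -> float:
    # Simplification: pressure proportional to engine speed.
    return UNIT_PEAK_AT_RATED_BAR * rpm / RATED_SPEED_RPM

def common_rail_pressure(rpm: float) -> float:
    # The accumulator decouples pressure from the pumping event.
    return RAIL_PRESSURE_BAR

for rpm in (1000, 2000, 4000):
    print(rpm, round(unit_injector_pressure(rpm)), common_rail_pressure(rpm))
```

At low speed the cam-driven injector in this sketch reaches only a fraction of its rated pressure, while the common-rail value stays fixed, which is the point the bullet makes.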

Antimatter

Definition

Antimatter rockets are what most people think of when talking about rockets of the future. This is hardly surprising, as it is such an attractive word for writers of science fiction.
It is, however, not only interesting in the realm of science fiction. Make no mistake; antimatter is real. Small amounts, on the order of nanograms, are produced at special facilities every year. It is also the most expensive substance on Earth; in 1999 the estimated cost for 1 gram of antimatter was about $62.5 trillion.
The reason it is so attractive for propulsion is the energy density that it possesses. Consider that the ideal energy density for chemical reactions is 1 × 10^7 J/kg, for nuclear fission it is 8 × 10^13 J/kg and for nuclear fusion it is 3 × 10^14 J/kg, but for matter-antimatter annihilation it is 9 × 10^16 J/kg. This is about 10^10 (10 billion) times that of conventional chemical propellants.
This represents the highest energy release per unit mass of any known reaction in physics. The reason is that annihilation is the complete conversion of matter into energy, governed by Einstein's famous equation E = mc², rather than the partial conversion that occurs in fission and fusion.
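The annihilation figure quoted above is simply E = mc² evaluated per kilogram, which a few lines of Python confirm:

```python
# Energy densities from the text, with annihilation computed from E = mc^2.
C = 2.998e8  # speed of light, m/s

energy_per_kg = C**2  # J per kg of mass completely converted to energy
print(f"{energy_per_kg:.2e} J/kg")  # close to the 9e16 J/kg in the text

densities = {
    "chemical":     1e7,   # J/kg, ideal chemical reaction
    "fission":      8e13,  # J/kg
    "fusion":       3e14,  # J/kg
    "annihilation": energy_per_kg,
}
# Ratio to chemical propellants: on the order of 10^10, as stated.
print(densities["annihilation"] / densities["chemical"])
```

The computed value, about 9.0 × 10^16 J/kg, matches the text, and the ratio to chemical propellants comes out near 10^10.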
Antimatter is exactly what you might think it is: the opposite of normal matter, of which the majority of our universe is made. Until just recently, the presence of antimatter in our universe was considered to be only theoretical. In 1928, British physicist Paul A.M. Dirac revised Einstein's famous equation E = mc². Dirac said that Einstein didn't consider that the "m" in the equation, mass, could have negative properties as well as positive. Dirac's equation (E = ±mc²) allowed for the existence of anti-particles in our universe. Scientists have since proven that several anti-particles exist.
These anti-particles are, literally, mirror images of normal matter. Each anti-particle has the same mass as its corresponding particle, but the electrical charges are reversed.
Particle Annihilation:
When antimatter comes into contact with normal matter, these equal but opposite particles collide to produce an explosion emitting pure radiation, which travels out of the point of the explosion at the speed of light. Both particles that created the explosion are completely annihilated, leaving behind other subatomic particles. The explosion that occurs when antimatter and matter interact transfers the entire mass of both objects into energy. Scientists believe that this energy is more powerful than any that can be generated by other propulsion methods.
The problem with developing antimatter propulsion is that there is a lack of antimatter existing in the universe. If there were equal amounts of matter and antimatter, we would likely see these reactions around us. Since antimatter doesn't exist around us, we don't see the light that would result from it colliding with matter.
It is possible that particles outnumbered anti-particles at the time of the Big Bang. As stated above, the collision of particles and anti-particles destroys both. And because there may have been more particles in the universe to start with, those are all that's left. There may be no naturally-existing anti-particles in our universe today. However, scientists discovered a possible deposit of antimatter near the center of the galaxy in 1977. If that does exist, it would mean that antimatter exists naturally, and the need to make our own antimatter would be eliminated.

Antiproton Decelerator :
The Antiproton Decelerator is a very special machine compared to what already exists at CERN and other laboratories around the world. So far, an "antiparticle factory" consisted of a chain of several accelerators, each one performing one of the steps needed to produce antiparticles. The CERN antiproton complex is a very good example of this.
At the end of the 1970s CERN built an antiproton source called the Antiproton Accumulator (AA). Its task was to produce and accumulate high-energy antiprotons to feed into the SPS in order to transform it into a "proton-antiproton collider". As soon as antiprotons became available, physicists realized how much could be learned by using them at low energy, so CERN decided to build a new machine: LEAR, the Low Energy Antiproton Ring. Antiprotons accumulated in the AA were extracted, decelerated in the PS and then injected into LEAR for further deceleration. In 1986 a second ring, the Antiproton Collector (AC), was built around the existing AA in order to improve the antiproton production rate by a factor of 10.
The AC is now being transformed into the AD, which will perform all the tasks that the AC, AA, PS and LEAR used to do with antiprotons, i.e. produce, collect, cool, decelerate and eventually extract them to the experiments.
How does the AD work?
Antiparticles have to be created from energy (remember: E = mc²). This energy is obtained from protons that have been previously accelerated in the PS. These protons are smashed into a block of metal, called a target. Copper or iridium targets are used mainly because they are easy to cool. The abrupt stopping of such energetic particles releases a huge amount of energy into a small volume, heating it to such temperatures that matter-antimatter particle pairs are spontaneously created. In about one collision out of a million, an antiproton-proton pair is formed. But given that about 10 trillion protons hit the target (about once per minute), this still makes a good 10 million antiprotons heading towards the AD.
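The production numbers above are a straightforward multiplication, which can be checked directly:

```python
# Back-of-envelope antiproton yield per pulse, using the figures in the text.
protons_per_pulse = 10e12  # "10 trillion" protons hitting the target
yield_per_proton = 1e-6    # ~1 antiproton-proton pair per million collisions

antiprotons = protons_per_pulse * yield_per_proton
print(f"{antiprotons:.0e} antiprotons per pulse")
```

The result is 1 × 10^7, i.e. the "good 10 million antiprotons" stated in the text.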
The newly created antiprotons behave like a bunch of wild kids; they are produced almost at the speed of light, but not all of them have exactly the same energy (this is called "energy spread"). Moreover, they run randomly in all directions, also trying to break out 'sideways' ("transverse oscillations"). Bending and focusing magnets make sure they stay on the right track, in the middle of the vacuum pipe, while they begin to race around in the ring.
At each turn, the strong electric fields inside the radio-frequency cavities begin to decelerate the antiprotons. Unfortunately, this deceleration increases the size of their transverse oscillations: if nothing is done to cure that, all antiprotons are lost when they eventually collide with the vacuum pipe. To avoid that, two methods have been invented: 'stochastic' and 'electron cooling'. Stochastic (or 'random') cooling works best at high speeds (around the speed of light, c), and electron cooling works better at low speed (still fast, but only 10-30 % of c). Their goal is to decrease energy spread and transverse oscillations of the antiproton beam.
Finally, when the antiprotons' speed is down to about 10% of the speed of light, the squeezed group of antiprotons (called a "bunch") is ready to be ejected. One "deceleration cycle" is over: it has lasted about one minute.
A strong 'kicker' magnet is fired in less than a millionth of a second, and at the next turn, all antiprotons are following a new path, which leads them into the beam pipes of the extraction line. There, additional dipole and quadrupole magnets steer the beam into one of the three experiments.

Application of Nitrous Oxide in Automobiles

Definition

Nitrous oxide, also known as dinitrogen oxide or dinitrogen monoxide, is a chemical compound with chemical formula N2O. Under room conditions it is a colourless non-flammable gas, with a pleasant slightly sweet odor. It is commonly known as laughing gas due to the exhilarating effects of inhaling it, and because it can cause spontaneous laughter in some users. It is used in surgery and dentistry for its anaesthetic and analgesic effects. Nitrous oxide is present in the atmosphere where it acts as a powerful greenhouse gas.
The gas was discovered by Joseph Priestley in 1772. Humphry Davy in the 1790s tested the gas on himself and some of his friends, including the poets Samuel Taylor Coleridge and Robert Southey. They soon realised that nitrous oxide considerably dulled the sensation of pain, even if the inhaler were still semi-conscious, and so it came into use as an anaesthetic, particularly by dentists, who do not typically have access to the services of an anesthesiologist and who may benefit from a patient who can respond to verbal commands.

The structure of the nitrous oxide molecule is a linear chain of a nitrogen atom bound to a second nitrogen, which in turn is bound to an oxygen atom. It can be considered a resonance hybrid of
N≡N⁺-O⁻

and

N⁻=N⁺=O
Nitrous oxide (N2O) should not be confused with the other nitrogen oxides such as nitric oxide (NO) and nitrogen dioxide (NO2). Nitrous oxide can be used to produce nitrites by mixing it with boiling alkali metals, and to oxidize organic compounds at high temperatures.
Principle:
The objective of nitrous oxide is to make more horsepower, which is achieved in two ways. Firstly, nitrous oxide comprises one part oxygen and two parts nitrogen. This is a much higher percentage of oxygen than that found in the atmosphere and, because of this, the additional oxygen being forced into the combustion chamber provides more potential power. Nonetheless, the additional power cannot be realized safely without enriching the amount of fuel in the combustion chamber. The second way nitrous oxide increases an engine's horsepower is by cooling the air charge from the atmosphere.
One of the most important aspects of keeping an engine healthy when using nitrous oxide is to ensure it operates at the proper air/fuel ratio. Running too lean can cause detonation, resulting in damaged engine parts. Running too rich can also harm performance and destroy engine parts. Once calibrated, the system injects the proper amount of fuel along with the nitrous to maintain the correct air/fuel ratio. It should be ensured that the amount of nitrous the system is engineered to dispense does not exceed what the intake system can flow. This prevents fuel "puddling" or distribution problems.
A further advantage of a ‘Wet’ system is that it lends itself to fine-tuning. By adjusting the fuel pressure and fuel orifice, either up or down from the baseline, the system's performance can be further improved. In addition, on a direct-port nitrous system each cylinder can be fine tuned to optimize performance and overcome rich or lean cylinders that the engine may have naturally aspirated.
The internal-combustion engine is basically a large air pump, and its ability to pump air is one of the factors which determine how much power it can produce. Air contains oxygen, and by drawing more oxygen into the combustion chamber, more power will be produced. In order to achieve efficient combustion, the air needs to be mixed with fuel in the correct ratio. The stoichiometric (chemically correct) ratio for basic gasoline is 14.7 parts air to 1 part fuel.
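The oxygen argument above can be made quantitative: N2O carries a much larger oxygen mass fraction than air does. The short calculation below uses standard molar masses and the 14.7:1 stoichiometric ratio from the text; the 23.2% oxygen-by-mass figure for air is a standard reference value.

```python
# Compare the oxygen mass fraction of N2O with that of ordinary air.
M_N, M_O = 14.007, 15.999  # molar masses of N and O, g/mol

o2_fraction_n2o = M_O / (2 * M_N + M_O)   # N2O: two N atoms, one O atom
print(f"N2O oxygen mass fraction: {o2_fraction_n2o:.1%}")  # ~36.4%
print("Air oxygen mass fraction: ~23.2%")

# Stoichiometric air/fuel ratio for gasoline quoted in the text:
AFR_STOICH = 14.7
fuel_per_kg_air = 1 / AFR_STOICH
print(f"Fuel needed per kg of air: {fuel_per_kg_air:.3f} kg")
```

Roughly 36% of nitrous oxide's mass is oxygen versus about 23% for air, which is why extra fuel must be added whenever nitrous is injected to hold the mixture at the correct ratio.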

Working of Nitrous:
A nitrous system has many benefits. There are two main types of nitrous kits, direct-port nozzle kits and plate kits; both feature very high quality and sound engineering.
1. Direct port nozzle system:
The most advanced device in nitrous nozzle technology is the Power Wing nozzle. With its unique wing-tip shape, it not only produces a low-pressure area on the trailing edge of the nozzle that improves atomization, it also causes less obstruction and turbulence in the air intake tract. The internal passages have unrestricted flow yet, remarkably, remain freeze-free at all horsepower levels.

Conventional nozzle vs. Power Wing™ nozzle:
1. Straight delivery of nitrous.
2. Straight delivery of fuel.
3. The aero shape of the Power Wing nozzle reduces turbulence in the intake tract, leading to better atomization of fuel and nitrous to develop more power.


The plate systems:
The carburettor-style kits feature the Billet Atomizer™ plate, which differs from other plates in two ways. One, the inlet fitting is engineered to prevent the nitrous from expanding. Expansion leads to freezing and a consequent reduction in flow. Two, the fitting is designed to eliminate turbulence of the nitrous, which can also reduce the rate of flow. Obviously, any loss of flow results in a subsequent loss of power. The spray bars in the Billet Atomizer™ plate also feature symmetrical clusters of multiple holes, designed for improved atomization.
When comparing the costs of tuning an internal combustion engine, nitrous oxide offers more power-per-dollar than all known alternatives. It offers appreciably more than a turbocharger or blower, and is superior to a new set of cylinder heads or a different camshaft.
Another great advantage of installing nitrous oxide is its ability to provide instant power when it's needed. Negotiating a high-horsepower engine through city traffic is usually not regarded as the most pleasant motoring experience.
Tuning with nitrous also provides the potential to increase power levels. By purchasing an adjustable kit, more power can be added, assuming the vehicle's engine, transmission and driveline are up to the task. It's simply a matter of changing jets.
Installing a nitrous system is reasonably straightforward when compared to other horsepower-improving modifications. And, unlike cylinder heads and cams etc., the system can always be transferred from vehicle to vehicle.

Atomic Battery

Definition
A burgeoning need exists today for small, compact, reliable, lightweight and self-contained rugged power supplies to provide electrical power in such applications as electric automobiles, homes, industry, agriculture, recreation, remote monitoring systems, spacecraft and deep-sea probes. Radar, advanced communication satellites and especially high-technology weapon platforms will require much larger power sources than today's power systems can deliver. For very high power applications, nuclear reactors appear to be the answer. However, for the intermediate power range, 10 to 100 kilowatts (kW), the nuclear reactor presents formidable technical problems.

Because of the short and unpredictable lifespan of chemical batteries, however, regular replacements would be required to keep these devices humming. Also, enough chemical fuel to provide 100 kW for any significant period of time would be too heavy and bulky for practical use. Fuel cells and solar cells require little maintenance, and the latter need plenty of sun.

Thus the demand to exploit radioactive energy has become inevitably high. Several methods have been developed for conversion of the radioactive energy released during the decay of natural radioactive elements into electrical energy. A grapefruit-sized radioisotope thermoelectric generator that utilized heat produced from alpha particles emitted as plutonium-238 decays was developed during the early 1950s.

Since then, nuclear power has received significant consideration as an energy source of the future. Also, with the advancement of technology, the requirement for long-lasting energy sources has increased to a great extent. A solution for a long-term energy source is, of course, the nuclear battery, with a lifespan measured in decades and the potential to be nearly 200 times more efficient than currently used ordinary batteries. These incredibly long-lasting batteries are still in the theoretical and developmental stage, but they promise to provide clean, safe, almost endless energy.
Betavoltaics :
Betavoltaics is an alternative energy technology that promises vastly extended battery life and power density over current technologies. Betavoltaics are generators of electrical current, in effect a form of battery, which use energy from a radioactive source emitting beta particles (electrons). The functioning of a betavoltaic device is somewhat similar to a solar panel, which converts photons (light) into electric current.
The betavoltaic technique uses a silicon wafer to capture electrons emitted by a radioactive gas, such as tritium. It is similar to the mechanics of converting sunlight into electricity in a solar panel. The flat silicon wafer is coated with a diode material to create a potential barrier. The radiation absorbed in the vicinity of a potential barrier, such as a p-n junction or a metal-semiconductor contact, generates separate electron-hole pairs which in turn flow in an electric circuit due to the voltaic effect. Of course, this occurs to a varying degree in different materials and geometries.
A pictorial representation of basic betavoltaic conversion is shown in Figure 1. Electrode A (P-region) has a positive potential while electrode B (N-region) is negative, with the potential difference provided by conventional means.
Figure 1
The junction between the two electrodes is comprised of a suitably ionisable medium exposed to decay particles emitted from a radioactive source.
The energy conversion mechanism for this arrangement involves energy flow in different stages:
Stage 1: Before the radioactive source is introduced, a difference in potential between the two electrodes is provided by conventional means. An electrical load RL is connected across electrodes A and B. Although a potential difference exists, no current flows through the load RL because the electrical forces are in equilibrium and no energy comes out of the system. We shall call this ground state E0.
Stage 2: Next, we introduce the radioactive source, say a beta emitter, to the system. Now the energy of the beta particle, Eb, generates electron-hole pairs in the junction by imparting kinetic energy which knocks electrons out of the neutral atoms. This amount of energy, E1, is known as the ionization potential of the junction.
Stage 3: Further, the beta particle imparts an amount of energy in excess of the ionization potential. This additional energy raises the electron energy to an elevated level E2. Of course, the beta particle does not impart its energy to a single ion pair; a single beta particle will generate as many as thousands of electron-hole pairs. The total number of ions per unit volume of the junction depends upon the junction material.
Stage 4: Next, the electric field present in the junction acts on the ions and drives the electrons into electrode A. The electrons collected in electrode A, together with the electron deficiency of electrode B, establish a Fermi voltage between the electrodes. Naturally, the electrons in electrode A seek to give up their energy and return to their ground state (law of entropy).
Stage 5: The Fermi voltage drives electrons from electrode A through the load, where they give up their energy in accordance with conventional electrical theory. A voltage drop occurs across the load as the electrons give up an amount of energy E3. The amount of energy available to be removed from the system is then
E3 = Eb - E1 - L1 - L2
where L1 is the converter loss and L2 is the loss in the electrical circuit.
Stage 6: The electrons, after passing through the load, have an amount of energy E4. From the load, the electrons are driven into electrode B, where they are allowed to recombine with junction ions, releasing the recombination energy E4 in the form of heat. This completes the circuit, and the electron has returned to its original ground state.
The end result is that the radioactive source acts as a constant current generator. The energy balance equation can then be written as
E0 = Eb - E1 - E3 - L1 - L2
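As a quick sanity check on the staged energy bookkeeping above, the two balance equations can be evaluated numerically. The values below are illustrative placeholders (arbitrary units), not measured data:

```python
# Illustrative betavoltaic energy balance (all values are assumptions).
E_b = 100.0  # energy of the incoming beta particle
E_1 = 30.0   # ionization potential of the junction
L_1 = 10.0   # converter loss
L_2 = 5.0    # loss in the electrical circuit

# Stage 5: energy deliverable to the load
E_3 = E_b - E_1 - L_1 - L_2
print("E3 =", E_3)

# Energy balance: E0 = Eb - E1 - E3 - L1 - L2
E_0 = E_b - E_1 - E_3 - L_1 - L_2
print("E0 =", E_0)
```

With these numbers E3 comes out to 55 and the balance E0 evaluates to 0, i.e. all of the beta particle's energy is accounted for by the useful output and the two losses.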
Until now, betavoltaics has been unable to match solar-cell efficiency. The reason is simple: when the gas decays, its electrons shoot out in all directions, and many of them are lost. A new betavoltaic device using porous silicon diodes was proposed to increase efficiency. The flat silicon surface, where the electrons are captured and converted to a current, is turned into a three-dimensional surface by adding deep pits. Each pit is about 1 micron wide (that is, four hundred-thousandths of an inch) and more than 40 microns deep. When the radioactive gas occupies these pits, it creates the maximum opportunity for harnessing the reaction.
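A rough estimate shows why the pits help. Modelling each pit as a cylinder about 1 micron across and 40 microns deep, and assuming (as an illustration) that each pit occupies a 1 micron × 1 micron cell of the wafer, the sidewall area dwarfs the flat area it replaces:

```python
# Rough capture-area gain from etching deep pits into a flat wafer.
import math

w = 1e-6   # pit width (diameter), m
d = 40e-6  # pit depth, m

flat_cell = w * w           # flat area one pit occupies (packing assumption)
pit_wall = math.pi * w * d  # cylindrical sidewall area added per pit
gain = (flat_cell + pit_wall) / flat_cell
print(f"~{gain:.0f}x more capture area than a flat wafer")
```

Under these assumptions the pitted surface offers on the order of a hundred times more capture area, which is the "maximum opportunity for harnessing the reaction" the text describes. The exact factor depends on pit packing density, which is not stated in the source.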

Optoelectrics:
An optoelectric nuclear battery has been proposed by researchers of the Kurchatov Institute in Moscow. A beta emitter such as technetium-99 or strontium-90 is suspended in a gas or liquid containing luminescent gas molecules of the excimer type, constituting a "dust plasma". This permits a nearly lossless emission of beta electrons from the emitting dust particles for excitation of the gases, whose excimer line is selected for the conversion of the radioactivity into a surrounding photovoltaic layer, such that a comparably lightweight, low-pressure, high-efficiency battery can be realized. These nuclides are low-cost radioactive by-products of nuclear power reactors. The diameter of the dust particles is so small (a few micrometres) that the electrons from the beta decay leave the dust particles nearly without loss. The surrounding weakly ionized plasma consists of gases or gas mixtures (e.g. krypton, argon, xenon) with excimer lines, such that a considerable amount of the energy of the beta electrons is converted into this light. The surrounding walls contain photovoltaic layers with wide forbidden zones, e.g. diamond, which convert the optical energy generated from the radiation into electric energy.
The battery would consist of an excimer of argon, xenon, or krypton (or a mixture of two or three of them) in a pressure vessel with an internal mirrored surface, finely ground radioisotope, and an intermittent ultrasonic stirrer, illuminating a photocell with a band gap tuned to the excimer. When the electrons of the beta-active nuclides (e.g. krypton-85 or argon-39) are excited, in the narrow excimer band at minimum thermal losses, the radiation so obtained is converted into electricity very efficiently in a high-band-gap photovoltaic layer (e.g. in a p-n diode). The electric power per weight, compared with existing radionuclide batteries, can then be increased by a factor of 10 to 50 or more. If the pressure vessel is carbon fibre/epoxy, the weight-to-power ratio is said to be comparable to an air-breathing engine with fuel tanks. The advantage of this design is that precision electrode assemblies are not needed, and most beta particles escape the finely divided bulk material to contribute to the battery's net power. The disadvantage lies in the high price of the radionuclide and in the high pressure of up to 10 MPa (100 bar) and more for the gas, which requires an expensive and heavy container.

Blended Wing Body

Definition

Blended Wing Body (BWB) aircraft have a flattened, airfoil-shaped body which produces most of the lift, the wings contributing the balance. The body form is composed of distinct and separate wing structures, though the wings are smoothly blended into the body. By way of contrast, flying wing designs are defined as tailless fixed-wing aircraft with no definite fuselage, with most of the crew, payload and equipment housed inside the main wing structure. A blended wing body has a lift-to-drag ratio 50% greater than a conventional airplane. Thus the BWB incorporates design features from both a futuristic fuselage and a flying wing design. The purported advantages of the BWB approach are efficient high-lift wings and a wide airfoil-shaped body. This enables the entire craft to contribute to lift generation, with the result of potentially increased fuel economy and range.
A flying wing is a type of tailless aircraft design and has been known since the early days of aviation. Since a wing is necessary for any aircraft, removing everything else, like the tail and fuselage, results in a design with the lowest possible drag. Successful applications of this configuration include the Ho IX and the later Ho 229 developed by the Horten brothers in Germany in the early 1940s. Later, Northrop began designing flying wings such as the N-1M in 1940 and the XB-35 bomber, which first flew in 1946. In 1988, when NASA Langley Research Centre's Dennis Bushnell asked the question "Is there a renaissance for the long-haul transport?" there was cause for reaction. In response, a brief preliminary design study was conducted at McDonnell Douglas to create and evaluate alternate configurations. A preliminary configuration concept, shown in Fig. 1.4, was the result.
Here, the pressurized passenger compartment consisted of adjacent parallel tubes, a lateral extension of the double-bubble concept. Comparison with a conventional-configuration airplane sized for the same design mission indicated that the blended configuration was significantly lighter, had a higher lift-to-drag ratio, and had a substantially lower fuel burn. In the modern era, after the B-2 bomber (1989), the blended wing body was used for stealth operations. The unmanned combat air vehicle (UCAV) named X-47 was subjected to test flights in 2003. Flight testing began on 20th July; the first flight reached an altitude of 7500 feet MSL (2286 m) and lasted 31 minutes. On 4th September, the remotely piloted aircraft was stalled for the first time. Most recently, NASA and Boeing successfully completed initial flight testing of the Boeing X-48B on March 19, 2010. The Blended Wing Body (BWB) is a relatively new aircraft concept that has potential use as a commercial or military aircraft, for cargo delivery or as a fuel tanker.
Formulation Of BWB Concept :
NASA Langley Research Centre funded a small study at McDonnell Douglas to develop and compare advanced-technology subsonic transports for a design mission of 800 passengers and a 7000-n-mile range at a Mach number of 0.85. Composite structure and advanced-technology turbofans were utilized. Defining the pressurized passenger cabin for a very large airplane offers two challenges.
First, the square-cube law shows that the cabin surface area per passenger available for emergency egress decreases with increasing passenger count. Second, cabin pressure loads are most efficiently taken in hoop tension. Thus, the early study began with an attempt to use circular cylinders for the fuselage pressure vessel, along with the corresponding first cut at the airplane geometry. The engines are buried in the wing root, and it was intended that passengers could egress from the sides of both the upper and lower levels. Clearly, the concept was headed back to a conventional tube-and-wing configuration.
Therefore, it was decided to abandon the requirement for taking pressure loads in hoop tension and to assume that an alternate efficient structural concept could be developed. Removal of this constraint became pivotal for the development of the BWB. Passenger cabin definition became the origin of the design, with the hoop tension structural requirement deleted. Three canonical forms, shown in Fig 2.1a, each sized to hold 800 passengers, were considered. The sphere has minimum surface area; however, it is not streamlined. Two canonical streamlined options include the conventional cylinder and a disk, both of which have nearly equivalent surface area. Next, each of these fuselages is placed on a wing that has a total surface area of 1393.54 sq-m. Now the effective masking of the wing by the disk fuselage results in a reduction of total aerodynamic wetted area of 650 sq-m compared to the cylindrical fuselage plus wing geometry, as shown in Fig 2.1b.
Next, adding engines (Fig 2.1c) increases the difference in total wetted area to 947.6 sq-m. (Weight and balance require that the engines be located aft on the disk configuration.) Finally, adding the required control surfaces to each configuration, as shown in Fig 2.1d, results in a total wetted-area difference of 1328.5 sq-m, or a reduction of 33%. Because the cruise lift-to-drag ratio is related to the wetted-area aspect ratio, the BWB configuration implied a substantial improvement in aerodynamic efficiency.
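The quoted figures can be sanity-checked with a few lines of arithmetic. The text gives the final wetted-area difference (1328.5 sq-m) and the percentage reduction (33%) but not the conventional baseline, so the baseline below is back-calculated and should be read as an implied estimate, not a figure from the study.

```python
# Rough check of the wetted-area figures quoted above.
# The conventional (cylinder + wing) baseline wetted area is NOT stated
# in the text; it is back-calculated here from the quoted reduction.
reduction_m2 = 1328.5    # final wetted-area difference (sq-m), from the text
reduction_frac = 0.33    # quoted 33% reduction

baseline_m2 = reduction_m2 / reduction_frac   # implied conventional baseline
bwb_m2 = baseline_m2 - reduction_m2           # implied BWB wetted area

print(f"implied baseline wetted area ~ {baseline_m2:.0f} sq-m")
print(f"implied BWB wetted area      ~ {bwb_m2:.0f} sq-m")
```

The implied baseline of roughly 4000 sq-m is consistent with the stepwise differences (650, 947.6, 1328.5 sq-m) all referring to the same pair of configurations.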

Aerodynamics:
Some insight into the aerodynamic design of the BWB is provided in Fig 4.2, where the trade between wing chord, thickness, and lift coefficient is shown. The outboard wing is moderately loaded, similar to a conventional configuration, where drag is minimized with a balance between the wetted area and shock strength. Moving inboard, the centerbody, with its very large chord, calls for correspondingly lower section lift coefficients to maintain an elliptic span load. The low section lift requirement allows the very thick airfoils for packaging the passenger compartment and trailing-edge reflex for pitch trim.
Navier–Stokes computational fluid dynamics (CFD) methods, in both inverse-design and direct-solution modes, were employed to define the final BWB geometry. The typical shock on the outboard wing is smeared into a compression wave on the centerbody. The flow pattern on the centerbody remains essentially invariant with angle of attack, and flow separation is initiated in the kink region between the outboard wing and the centerbody. Outer-wing flow remains attached, providing lateral control into the stall regime. Similarly, the flow over the centerbody remains attached and provides a nearly constant flow environment for the engine inlets. This flow behaviour is a consequence of significant lateral flow on the centerbody that provides three-dimensional relief of compressibility effects. However, the relief on the centerbody is traded for a transonically stressed flow environment in the kink region. This is the ideal spanwise location for the stall to begin from a flight mechanics point of view: the ailerons remain effective, and pitch-up is avoided.

Ceramic Disc Brakes

Definition

Until now, brake discs have been made of grey cast iron. These are heavy, which hurts acceleration, increases fuel consumption, and produces a high gyroscopic effect.
Ceramic brake discs weigh less than carbon/carbon discs but offer the same frictional values with more initial bite, at a fraction of the price. Carbon/carbon discs are used only in Formula 1 racing cars and the like because they are so expensive. Moreover, ceramic brake discs perform well even in wet conditions, where carbon/carbon discs notoriously fail.
Comparing their weight, you will see right away that we are looking at two different worlds, with ceramic brake discs more than 61 per cent lighter than conventional cast iron discs. In practice this reduces the weight of the car, depending on the size of the brake discs, by up to 20 kg. Apart from saving fuel, and therefore lowering emissions for the same mileage, this also means a reduction in unsprung mass, with a further improvement in shock absorber response and behavior. Another benefit is that the manufacturer can add more safety features without increasing overall weight.
The ceramic material is created when the carbon matrix combines with liquid silicon. This fiber-reinforced ceramic material cools overnight, and the gleaming dark grey brake disc is ready. Resin is a binder which holds the different constituents together.
Resins are of two types:
1. Thermosetting resins
2. Thermoplastic resins
Thermoplastic resins are those which soften on heating and harden on cooling. Repeated heating and cooling does not affect the chemical nature of these materials. They are formed by addition polymerization and have a long-chain molecular structure.
Thermosetting resins are those which harden during the molding process (by heating) and, once solidified, cannot be softened again; i.e., they are permanent-setting resins. During molding, such resins acquire a three-dimensional cross-linked structure with predominantly strong covalent bonds. They are formed by condensation polymerization and are stronger and harder than thermoplastic resins. They are hard, rigid, water resistant, and scratch resistant.
Coating Of Ceramics On Conventional Brake Disc:
Earlier brake discs were made of grey cast iron, but these are heavy, which reduces acceleration and uses more fuel. The new technology developed by Freno Ltd uses a metal matrix composite for the disc: basically an alloy of aluminum for lightness with silicon carbide for strength. However, it was found that the ceramic additive made the disc highly abrasive and gave a low and unstable coefficient of friction, so the surface had to be engineered in some way to overcome this problem. After experiments, Sulzer Metco Ltd found an answer in the form of a special ceramic coating. They developed the thermal spray technology, the plasma surface-engineering machinery used for the task, and the coating materials.
In use, the ceramic face requires a special carbon-metallic friction pad, which deposits a layer of material on the brake disc. This coupling provides the required combination of exceptional wear resistance and a high, stable coefficient of friction.
The coated matrix composite discs were first used on high performance motor cycles, where the reduced gyroscopic effect had the additional advantage of making the cycles easier to turn.
Another company, Lanxide, used aluminium as the disc material. To provide the necessary abrasion resistance, aluminium discs have to be reinforced with a ceramic material, hence the metal matrix composite. They also used silicon carbide to increase strength.

Porsche Ceramic Disc Brakes (PCCB):
After a long period of research and testing, Porsche has developed new high-performance disc brakes: PCCB (Porsche Ceramic Composite Brakes). Porsche is the first car manufacturer in the world to develop ceramic brake discs with involute cooling ducts for efficient cooling. The new brake system offers a substantial improvement in car braking technology and sets entirely new standards on decisive criteria such as braking response, fading stability, weight, and service life.
PORSCHE CERAMIC COMPOSITE BRAKE
Porsche's new brake system also offers obvious advantages in emergencies at low speeds: emergency application of the brakes with PCCB technology does not require substantial pedal force or any technical assistance to build up maximum brake force within fractions of a second. Instead, the Porsche Ceramic Composite Brake ensures maximum deceleration from the start without requiring any particular pressure on the brake pedal. The new brake system is just as superior in its response under wet conditions, since the new brake linings cannot absorb water the way conventional linings do. The final point, of course, is that the cross-drilled brake discs help to optimize the response of the brakes in wet weather as well.
The process involves heating carbon powder, resin, and carbon fibers in a furnace to about 1700 degrees Celsius under high vacuum.
1. Ceramic brake discs are 50% lighter than metal brake discs. As a result, they can reduce the weight of a car by up to 20 kg. In the case of a high-speed train such as the ICE, with 36 brake discs, the savings amount to 6 tons. Apart from saving fuel, this also means a reduction in unsprung mass, with a further improvement in shock absorber response and behavior.

2. The ceramic brake disc ensures very high and, in particular, consistent frictional values throughout the entire deceleration process. With Porsche ceramic brake discs, a car was able to decelerate from 100 km/h to rest in less than 3 seconds. In the case of Daewoo's Nexia, it takes about 4 seconds to stop the vehicle.
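The stopping times quoted above imply specific average decelerations, which can be checked with basic kinematics (the vehicle names and times are from the text; the calculation itself is a simple sketch):

```python
# Average deceleration implied by the quoted stops from 100 km/h to rest.
G = 9.81  # standard gravity, m/s^2

def avg_decel(v_kmh: float, t_s: float) -> float:
    """Average deceleration (m/s^2) for a stop from v_kmh km/h in t_s seconds."""
    return (v_kmh / 3.6) / t_s  # convert km/h to m/s, divide by stop time

pccb = avg_decel(100, 3.0)   # Porsche ceramic discs: < 3 s quoted
nexia = avg_decel(100, 4.0)  # Daewoo Nexia: ~4 s quoted

print(f"PCCB : {pccb:.2f} m/s^2 ({pccb / G:.2f} g)")
print(f"Nexia: {nexia:.2f} m/s^2 ({nexia / G:.2f} g)")
```

The 3-second stop works out to roughly 0.94 g of sustained deceleration, near the grip limit of ordinary road tires, which is why consistent friction through the whole stop matters.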

Cryogenic Hardening

Definition

Cryogenic hardening is the process of treating workpieces at cryogenic temperatures (below −150°C, −238°F, or 123 K) to remove residual stresses and improve wear resistance in steels by transforming all the austenite into martensite. In the past, toolmakers would bury components in snow banks for weeks or even months to improve wear resistance. Castings were left outside in the cold for months or years to age and stabilize. Swiss watchmakers noticed that extreme cold changed the properties of their metal clock parts for the better; they would store the parts in cold caves and let them freeze during the winter. This secret use of cold-treated metals, and the resulting increase in watch quality, lifted Swiss watchmaking to mystic levels.
German records from the 1930s tell of aircraft engine manufacturers testing cryogenics on their products with some success. During World War II, American bomber manufacturers used this method of cold tempering to stress-relieve aluminum superstructures. This allowed the airplanes to be made from thinner, lighter materials, and in turn to carry heavier ammunition and bomb loads, which dramatically increased their effectiveness.
Today cryogenic tempering is used to some degree in many industries. Its positive effects are not limited to metals: they extend to nylons and other plastics, lighting, high-voltage/high-amperage electrical systems, soldered connections, computer memory, circuit boards and components, well drilling, machining processes, casting and forging, ceramics, farming, transportation fleets, construction, excavation, and more.
Processes in Cryogenic Hardening:
1. Lowering the temperature of the object (RAMP DOWN).
2. Holding the temperature low (SOAK).
3. Bringing the temperature back up to room temperature (RAMP UP).
4. Elevating the temperature to above ambient (TEMPER RAMP UP).
5. Holding the elevated temperature for a specific time (TEMPER HOLD).



Ramp Down:
A typical cryogenic cycle brings the temperature of the part down to -300°F over a period of six to ten hours. This avoids thermally shocking the part, and there is ample reason for the slow ramp down. Suppose an object were simply dropped into a vat of liquid nitrogen. The outside of the object wants to reach the temperature of the liquid nitrogen, near -320°F, while the inside wants to remain at room temperature. This sets up a temperature gradient that is very steep in the first moments of the part's exposure to the liquid nitrogen. The cold outer region wants to contract to the size it would be at liquid-nitrogen temperature; the inside wants to stay the size it was at room temperature. This can set up enormous stresses in the surface of the part, which can lead to cracking at the surface. Some metals can take the sudden temperature change, but most tooling steels and steels used for critical parts cannot.
SOAK: Holding the temperature low
A typical soak segment holds the temperature at -320°F for some period of time, typically eight to forty hours. During the soak segment, the temperature is maintained at this low level. Although things are changing within the crystal structure of the metal at this temperature, the changes are relatively slow and need time to occur. One of them is the precipitation of fine carbides. In theory, a perfect crystal lattice is in its lowest energy state: if atoms are too near or too far from other atoms, or if there are vacancies or dislocations in the structure, the total energy of the structure is higher. By keeping the part at a low temperature for a long period of time, we believe we are getting some of this energy out of the lattice and producing a more perfect, and therefore stronger, crystal structure.
RAMP UP : Bringing the temperature back up to room temperature
A typical ramp-up segment brings the temperature back up to room temperature, which can take eight to twenty hours. The ramp-up cycle is very important to the process: ramping up too fast can cause problems with the part being treated. Think in terms of dropping an ice cube into a glass of warm water; the ice cube will crack. The same can happen to a treated part.
TEMPER RAMP UP : Elevating the temperature to above ambient
A typical temper segment ramps the temperature up to a predetermined level over a period of time. Tempering is important with ferrous metals. The cryogenic temperature converts almost all retained austenite in a part to martensite. This will be primary martensite, which is brittle, and it must be tempered back to reduce this brittleness. This is done with the same type of tempering process as is used in a quench-and-temper cycle in heat treating. We ramp up in temperature to ensure the temperature gradients within the part stay low. Typical tempering temperatures range from 300°F up to 1100°F, depending on the metal and its hardness.
TEMPER HOLD: Holding the elevated temperature for a specific time.
The temper-hold segment ensures the entire part has had the benefit of the tempering temperature. A typical temper hold time is about 3 hours, depending on the thickness and mass of the part. There may be more than one temper sequence for a given part or metal; certain metals have been found to perform better if tempered several times.
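The five segments above can be captured as a simple schedule. The durations and temperatures below come from the typical values quoted in the text, except the temper ramp-up duration, which is not stated and is an assumption here; the 300°F temper target is the lower bound of the quoted 300-1100°F range.

```python
# Sketch of a typical cryogenic-hardening cycle using the figures in the text.
from dataclasses import dataclass

@dataclass
class Segment:
    name: str
    target_f: float    # target temperature at end of segment, deg F
    hours: tuple       # (min, max) typical duration from the text

CYCLE = [
    Segment("RAMP DOWN",      -300, (6, 10)),   # slow cool to avoid thermal shock
    Segment("SOAK",           -320, (8, 40)),   # hold at liquid-nitrogen temperature
    Segment("RAMP UP",          70, (8, 20)),   # slow return to room temperature
    Segment("TEMPER RAMP UP",  300, (1, 2)),    # duration assumed; not in the text
    Segment("TEMPER HOLD",     300, (3, 3)),    # ~3 h hold, per the text
]

total_min = sum(s.hours[0] for s in CYCLE)
total_max = sum(s.hours[1] for s in CYCLE)
print(f"full cycle: {total_min}-{total_max} hours")
```

Even at the short end, the whole cycle runs more than a day, which is why cryogenic treatment is usually batched.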
Fine Carbide Precipitation
Cryogenic hardening of high-alloy steels, such as tool steel, results in the formation of very small carbide particles dispersed in the martensite structure between the larger carbide particles already present in the steel. The small, hard carbide particles within the martensitic matrix help support the matrix and resist penetration by foreign particles in abrasive wear.
The large improvements in tool life are usually attributed to this dispersion of carbides in conjunction with the transformation of retained austenite. This cryogenic processing step causes irreversible changes in the microstructure of the materials, which significantly improve their performance.
Cryogenic hardening of alloy steels transforms retained austenite to martensite. Eta carbide then precipitates in the matrix of the freshly formed martensite during the tempering process. This eta carbide formation favors a more stable, harder, more wear-resistant, and tougher material, strengthening it without appreciably changing its hardness.

Disk Brake

Definition
Ever since the invention of the wheel, if there has been "go" there has been a need for "whoa." As the level of technology of human transportation has increased, the mechanical devices used to slow down and stop vehicles have also become more complex. In this report I will discuss the history of vehicular braking technology and possible future developments. Before the "horse-less carriage," wagons and other animal-drawn vehicles relied on the animal's power to both accelerate and decelerate the vehicle. Eventually, supplemental braking systems were developed, consisting of a hand lever that pushed a wooden friction pad directly against the metal tread of the wheels. In wet conditions these crude brakes lost any effectiveness.

The early years of automotive development were an interesting time for the designing engineers, "a period of innovation when there was no established practice and virtually all ideas were new ones and worth trying. Quite rapidly, however, the design of many components stabilized in concept and so it was with brakes; the majority of vehicles soon adopted drum brakes, each consisting of two shoes which could be expanded inside a drum."
In this chaotic era comes the first record of the disk brake. Dr. F.W. Lanchester patented a design for a disk brake in 1902 in England, and it was incorporated into the Lanchester car produced from 1906 through 1914. These early disk brakes were not as effective at stopping as the contemporary drum brakes and were soon forgotten. Another important development occurred in the 1920s, when drum brakes were used at all four wheels instead of a single brake halting only the back axle and wheels, as on the Ford Model T. The disk brake was again utilized during World War II in aircraft landing gear. The aircraft disk brake system was adapted for automotive use, first in racing in 1952, then in production automobiles in 1956. United States auto manufacturers did not start to incorporate disk brakes in lower-priced, non-high-performance cars until the late 1960s.
Advantages of Disc Brakes over Drum Brakes:
As with almost any artifact of technology, drum brakes and disk brakes both have advantages and disadvantages. Drum brakes still have the edge in lower cost and lower complexity. This is why most cars built today use disk brakes in front but drum brakes on the back wheels, with four-wheel disks an extra-cost option or touted as a high-performance feature. Since the weight shift of a decelerating car puts most of the load on the front wheels, using disk brakes on only the front wheels is accepted manufacturing practice.

Drum brakes had another advantage over early disk brake systems. The geometry of the brake shoes inside the drum can be designed for a mechanical self-boosting action: the rotation of the drum pushes the leading-shoe brake pad into pressing harder against the drum. Early disk brake systems required an outside mechanical brake booster, such as a vacuum assist or hydraulic pump, to generate the pressure needed for primitive friction materials to apply the necessary braking force.

All friction braking technology converts the kinetic energy of a vehicle's forward motion into thermal energy: heat. The enemy of all braking systems is excessive heat, and drums are inferior to disks at dissipating it:

"The common automotive drum brake consists essentially of two shoes which may be expanded against the inner cylindrical surface of a drum.

The greater part of heat generated when a brake is applied has to pass through the drum to its outer surface in order to be dissipated to atmosphere, and at the same time (the drum is) subject to quite severe stresses due to the distortion induced by the opposed shoes acting inside the open ended drum.
The conventional disk brake, on the other hand, consists essentially of a flat disk on either side of which are friction pads; equal and opposite forces may be applied to these pads to press their working surfaces into contact with the braking path of the disks. The heat produced by the conversion of energy is dissipated directly from the surfaces at which it is generated and the deflection of the braking path of the disk is very small so that the stressing of the material is not so severe as with the drum."
The result of overheated brakes is brake fade: the same amount of force at the pedal no longer provides the same amount of stopping power, because high heat decreases the coefficient of friction between the friction material and the drum or disk. Drum brakes suffer a further setback when overheating: the inside radius of the drum expands, the outside radius of the brake shoe no longer matches it, and the actual contact surface decreases.
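The scale of the heat load the brakes must absorb follows directly from the kinetic-energy relation KE = ½mv². The 1500 kg mass and 100 km/h speed below are illustrative assumptions, not figures from the text:

```python
# Heat the brakes must dissipate for a single full stop: KE = 1/2 * m * v^2.
def braking_heat_kj(mass_kg: float, v_kmh: float) -> float:
    """Kinetic energy (kJ) converted to heat when stopping from v_kmh km/h."""
    v = v_kmh / 3.6                        # convert km/h to m/s
    return 0.5 * mass_kg * v ** 2 / 1000.0 # joules -> kilojoules

# Assumed mid-size car: 1500 kg stopping from 100 km/h.
print(f"{braking_heat_kj(1500, 100):.0f} kJ per stop")
```

A single stop of this kind dumps on the order of half a megajoule into the friction surfaces, which is why repeated hard stops overheat drums far faster than ventilated disks.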

Another advantage of disk brakes over drum brakes is that of weight. There are two different areas where minimizing weight is important. The first is unsprung weight. This is the total amount of weight of all the moving components of a car between the road and the suspension mounting points on the car’s frame.
Auto designs have gone to such lengths to reduce unsprung weight that some, such as the E-type Jaguar, moved the rear brakes inboard, next to the differential, connected to the drive shafts instead of on the rear wheel hubs. The second "weighty" factor is more of an issue on motorcycles: gyroscopic weight. The heavier the wheel unit, the more gyroscopic resistance to changing direction. Thus the bike’s steering would be higher effort with heavier drum brakes than with lighter disks. Modern race car disk brakes have hollow internal vents, cross drilling and other weight saving and cooling features.

Most early brake drums and disks were made of cast iron. Current OEM motorcycle disk brakes are usually stainless steel for corrosion resistance, but after-market racing brake disks are still made from cast iron for its better friction qualities. Other exotic materials have been used in racing applications. Carbon-fiber composite disks gripped by carbon-fiber pads were common in top-level racing motorcycles and cars in the early 1990s, but were outlawed by the respective racing sanctioning organizations due to sometimes spectacular failures. Carbon/carbon brakes also only worked properly at the very high temperatures of racing conditions and would not get hot enough to work in street applications.

A recent Ducati concept show bike uses brake disks of silesium, developed by the Russian aerospace industry (3), which is claimed to combine the friction coefficient of cast iron with the light weight of carbon fiber.
Working of Disc Brakes
The caliper is the part that holds the brake shoes on each side of the disk. In the floating-caliper brake, two steel guide pins are threaded into the steering-knuckle adapter. The caliper floats on four rubber bushings which fit on the inner and outer ends of the two guide pins. The bushings allow the caliper to swing in or out slightly when the brakes are applied.
When the brakes are applied, brake fluid flows to the cylinder in the caliper and pushes the piston out. The piston then forces the shoe against the disk. At the same time, the pressure in the cylinder causes the caliper to pivot inward, bringing the other shoe into tight contact with the disk. As a result, the two shoes "pinch" the disk tightly to produce the braking action.
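The pinch action described above can be put in numbers: hydraulic pressure acting on the piston area gives the clamp force, and friction at both pad faces times the effective radius gives the braking torque. All numeric values below (line pressure, piston diameter, pad friction coefficient, effective radius) are illustrative assumptions, not figures from the text.

```python
# Clamp force and braking torque for a single-piston floating caliper.
import math

def braking_torque(p_pa: float, piston_d_m: float, mu: float, r_eff_m: float) -> float:
    """Braking torque (N*m) from hydraulic pressure on a single-piston caliper."""
    area = math.pi * (piston_d_m / 2) ** 2  # piston face area, m^2
    clamp = p_pa * area                     # hydraulic clamp force, N
    # The floating caliper reacts the piston force through the outer pad,
    # so friction acts on BOTH faces of the disk: hence the factor of 2.
    return 2 * mu * clamp * r_eff_m

# Assumed: 5 MPa line pressure, 54 mm piston, mu = 0.4, 120 mm effective radius.
t = braking_torque(5e6, 0.054, 0.4, 0.12)
print(f"braking torque ~ {t:.0f} N*m")
```

Note the factor of 2: even though only one side has a piston, the caliper's float lets the single hydraulic force clamp the disk from both sides, exactly the "pinch" described above.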

STAGES OF WORKING
The sliding-caliper disk brake is similar to the floating-caliper disk brake. The difference is that the sliding caliper is suspended from rubber bushings on bolts, which permits the caliper to slide on the bolts when the brakes are applied.

Proper function of the brake depends on five conditions: (1) the rotor must be straight and smooth, (2) the caliper mechanism must be properly aligned with the rotor, (3) the pads must be positioned correctly, (4) there must be enough "pad" left, and (5) the lever mechanism must push the pads tightly against the rotor, with "lever" to spare. Most modern cars have disc brakes on the front wheels, and some have disc brakes on all four wheels. This is the part of the brake system that does the actual work of stopping the car. The most common type of disc brake on modern cars is the single-piston floating caliper, and in this article we will learn all about this type of disc brake design.