Wednesday, 5 February 2014

Ejection Seat

Almost since the first days of flight, man has been concerned with safe escape from an aircraft that is no longer flyable. Early escape equipment consisted of a recovery parachute only. As aircraft performance rapidly increased during World War II, it became necessary to assist crewmen in gaining clear, safe separation from the aircraft. This was accomplished with an ejector seat powered by a propellant-driven catapult, the first use of a propulsive element in aircrew escape. Since then, this collection of componentry has evolved through several generations into today's relatively complex systems, which are highly dependent upon propulsive elements. Ejection seats are among the most complex pieces of equipment on any aircraft, and some consist of thousands of parts. The purpose of the ejection seat is simple: to lift the pilot straight out of the aircraft to a safe distance, then deploy a parachute to allow the pilot to land safely on the ground.
The first operational use of a propulsive element to assist aircrew escape from an aircraft apparently occurred during World War II. The country credited with the first operational system was Germany, as it is known that approximately 60 successful ejections were made from German aircraft during World War II. It is interesting to note, however, that the first aircraft ejection seat was designed and successfully tested with a dummy in 1910 by J. S. Zerbe in Los Angeles, California, one year before the first parachutist successfully jumped from an aircraft. Another country involved in early ejection seat work was Sweden. Initial experiments were made by SAAB in 1942 using propellant-powered seats. The first successful dummy in-flight ejection was on 8 January 1942, and a successful live ejection was made on 29 July 1946. At the end of World War II, both the British and Americans acquired German and Swedish ejection seats and data, and this information and equipment added impetus to their efforts. The first live flight test in England occurred on 24 July 1946, when Mr. Bernard Lynch ejected from a Meteor III aircraft at 320 mph IAS at 8,000 feet using a prototype propellant-powered seat. On 17 August 1946, First Sergeant Larry Lambert ejected from a P-61B at 300 mph IAS at 7,800 feet to become the first live in-flight US ejection test subject.
Basic Components:
To understand how an ejection seat works, you must first be familiar with the basic components in any ejection system. Everything has to perform properly in a split second and in a specific sequence to save a pilot's life. If just one piece of critical equipment malfunctions, it could be fatal. Like any seat, the ejection seat's basic anatomy consists of the bucket, back and headrest. Everything else is built around these main components.
Here are key devices of an ejection seat:

• Catapult
• Rocket
• Restraints
• Parachute

This early propulsive element has been called a gun or catapult and is, in essence, a closed telescoping tube arrangement containing a propellant charge that forcibly extends the tubes, thereby imparting the necessary separation velocity to the ejection seat and its contents. The rocket is a propulsive device in the seat. The catapult remained as the initial booster to get the seat/man mass clear of the cockpit, while the rocket motor came on line, once clear of the cockpit, to act in a sustainer mode. The restraint system comprises the protective devices that keep the crew member from injury during ejection. Harness straps can be tightened and body position can be adjusted to reduce injury from the forces encountered during ejection. Leg-lifting devices and arm and leg restraints are provided to prevent limb flail injuries due to windblast forces. The limb restraints do not require the crew to hook up as they enter the aircraft and do not restrict limb movement during normal flight operations. The parachute allows the pilot to land safely on the ground.
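The catapult's job can be put in simple kinematic terms: accelerating the seat/man mass over the length of the tube stroke determines the separation velocity. The sketch below illustrates this with the constant-acceleration relation v = sqrt(2as); the 12 g figure matches the ACES II peak acceleration quoted later, while the 1 m stroke is an illustrative assumption, not a figure from the text.

```python
import math

def separation_velocity(accel_g, stroke_m):
    """Velocity gained over the catapult stroke, assuming constant
    acceleration: v = sqrt(2 * a * s)."""
    a = accel_g * 9.81  # convert g's to m/s^2
    return math.sqrt(2 * a * stroke_m)

# Illustrative values only: ~12 g over an assumed ~1 m catapult stroke
v = separation_velocity(12, 1.0)
print(f"Separation velocity: {v:.1f} m/s")
```

Even this rough estimate shows why a rocket sustainer is needed: the catapult alone imparts only enough velocity to clear the tail, not to gain recovery altitude at low speed.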
The Aces II Ejection Seat
The Advanced Concept Ejection Seat (ACES) was designed to be rugged and lightweight compared to earlier systems. It also was designed to be easy to maintain and updatable.

It includes the following features:
• Electronic Sequencing and timing
• Auto sensing of egress conditions
• Parachute reefing to control opening at all speed ranges
• Multi-Mode operation for optimum recovery of the crewman


The ACES II is a third-generation seat, capable of ejecting a pilot from zero-zero conditions up to maximum altitude and airspeeds in the 250 knots (288 mph / 463 kph) range. The peak catapult acceleration is about 12 Gz. The ACES II has three main operating modes, one each for low speed/low altitude, medium speed, and high speed/high altitude conditions.

• Mode 1: low altitude, low speed - Mode 1 is for ejections at speeds of less than 250 knots (288 mph / 463 kph) and altitudes of less than 15,000 feet (4,572 meters). The drogue parachute doesn't deploy in mode 1.

• Mode 2: low altitude, high speed - Mode 2 is for ejections at speeds of more than 250 knots and altitudes of less than 15,000 feet.

• Mode 3: high altitude, any speed - Mode 3 is selected for any ejection at an altitude greater than 15,000 feet. Parachute deployment is delayed by the sequencer until the seat-man package descends to either Mode 2 or Mode 1 conditions, whichever comes first.
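The three-mode logic above reduces to a pair of threshold tests on sensed altitude and airspeed. The sketch below is a simplified illustration of that decision, not the actual sequencer algorithm (which works from sensed static and dynamic pressure); the thresholds are the nominal published values from the text.

```python
def select_mode(airspeed_kt, altitude_ft):
    """Simplified sketch of the ACES II three-mode selection described
    above. Real sequencers infer speed and altitude from pressure
    sensors; here they are passed in directly."""
    if altitude_ft > 15_000:
        return 3  # high altitude, any speed: delay deployment
    if airspeed_kt >= 250:
        return 2  # low altitude, high speed: drogue stabilizes first
    return 1      # low altitude, low speed: no drogue, immediate chute

print(select_mode(180, 5_000))   # low and slow -> Mode 1
print(select_mode(400, 5_000))   # low and fast -> Mode 2
print(select_mode(300, 20_000))  # high altitude -> Mode 3
```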

Seat modes are selected by the sequencer based on atmospheric conditions, and the modes vary depending on differences in the conditions such as apparent airspeed and apparent altitude.

Electro-Hydraulic Brake

Definition
The Electro-Hydraulic Brake (EHB) system senses the driver's braking intent through a pedal simulator and controls the braking pressure at each wheel; it is a hydraulic brake-by-wire system. Many of the vehicle sub-systems in today's modern vehicles are being converted into "by-wire" type systems. This normally implies that a function which in the past was activated directly through a purely mechanical device is now implemented through electro-mechanical means by way of signal transfer to and from an Electronic Control Unit (ECU). Optionally, the ECU may apply additional "intelligence" based upon input from other sensors outside of the driver's influence. Electro-Hydraulic Brake is not a true "by-wire" system in the sense that the physical wires do not extend all the way to the wheel brakes. However, in the true sense of the definition, any EHB vehicle may be braked with an electrical "joystick" completely independent of the traditional brake pedal. It just so happens that hydraulic fluid is used to transmit energy from the actuator to the wheel brakes.
This configuration offers the distinct advantage that the current production wheel brakes may be retained while an integral, manually applied, hydraulic failsafe backup system is directly incorporated in the EHB system. The cost and complexity of this approach typically compares favorably to an Electro-Mechanical Brake (EMB) system, which requires significant investment in vehicle electrical failsafe architecture, with some needing a 42-volt power source. Therefore, EHB may be classified as a "stepping stone" technology to full Electro-Mechanical Brakes.
A base brake event can be described as a normal or typical stop in which the driver maintains the vehicle in its intended direction at a controlled deceleration level that does not closely approach wheel lock. All other braking events where additional intervention may be necessary, such as wheel brake pressure control to prevent lock-up, application of a wheel brake to transfer torque across an open differential, or application of an induced torque to one or two selected wheels to correct an under- or over steering condition, may be classified as controlled brake performance. Statistics from the field indicate the majority of braking events stem from base brake applications and as such can be classified as the single most important function. From this perspective, it can be of interest to compare modern-day Electro-Hydraulic Brake (EHB) hydraulic systems with a conventional vacuum-boosted brake apply system and note the various design options used to achieve performance and reliability objectives.
Single Channel Complexity Comparison for Base Brakes:
The conventional system utilizes a largely mechanical link all the way from the brake pedal through the vacuum booster and into the master cylinder piston. Proportional assist is provided by an air valve acting in conjunction with the booster diaphragm to utilize the stored vacuum energy. The piston and seal trap brake fluid and transmit the hydraulic energy to the wheel brake.
Compare this to the basic layout of the typical EHB system. First, the driver’s input is normally interpreted by up to three different devices: a brake switch, a travel sensor, and a pressure sensor while an emulator provides the normal pedal “feel”. To prevent unwanted brake applications, two of the three inputs must be detected to initiate base brake pressure. The backup master cylinder is subsequently locked out of the main wheel circuit using isolation solenoid valves, so all wheel brake pressure must come from a high-pressure accumulator source. This stored energy is created by pressurizing brake fluid from the reservoir with an electro-hydraulic pump into a suitable pre-charged vessel. The accumulator pressure is regulated by a separate pressure sensor or other device. The “by-wire” characteristics now come into play as the driver’s braking intent signals are sent to the ECU. Here an algorithm translates the dynamically changing voltage input signals into the corresponding solenoid valve driver output current waveforms.
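The two-of-three detection of driver intent described above is a simple voting scheme, sketched below. The threshold values are illustrative assumptions for the sketch, not figures from any production EHB system.

```python
def brake_requested(brake_switch, travel_mm, pressure_bar,
                    travel_threshold=1.0, pressure_threshold=2.0):
    """Two-of-three voting on the driver-intent inputs (brake switch,
    pedal travel sensor, pedal pressure sensor) before base brake
    pressure is initiated. Thresholds are illustrative only."""
    votes = [
        bool(brake_switch),
        travel_mm > travel_threshold,
        pressure_bar > pressure_threshold,
    ]
    return sum(votes) >= 2

# A single faulty input (e.g. a stuck pressure sensor) cannot trigger
# an unwanted brake application on its own:
print(brake_requested(False, 0.0, 10.0))  # one vote -> no braking
print(brake_requested(True, 5.0, 10.0))   # three votes -> brake
```

The design choice here is robustness against a single sensor fault: one spurious signal can neither apply nor (with the inverse logic) suppress braking.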

As the apply and release valves open and close, a pressure sensor at each wheel continuously “closes the loop” by feeding back information to the ECU so the next series of current commands can be given to the solenoid valves to assure fast and accurate pressure response.
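The closed-loop behavior described above, where the wheel pressure sensor feeds back to the ECU and the next valve current is computed from the error, can be sketched as a simple proportional-integral loop. The gains and the PI structure are assumptions for illustration; production ECUs use more elaborate, valve-specific current waveform shaping.

```python
class WheelPressureLoop:
    """Minimal PI sketch of per-wheel pressure control: the ECU compares
    commanded and measured pressure and converts the error into a
    valve-driver current. Gains are illustrative assumptions."""

    def __init__(self, kp=0.8, ki=0.2):
        self.kp, self.ki = kp, ki
        self.integral = 0.0

    def update(self, commanded_bar, measured_bar, dt):
        error = commanded_bar - measured_bar
        self.integral += error * dt
        # Positive current drives the apply valve, negative the release valve
        return self.kp * error + self.ki * self.integral
```

Each call to `update` is one iteration of the feedback loop: a large pressure error produces a large apply-valve current, and the integral term removes any steady-state offset.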
It is obvious the EHB system is significantly more complex in nature. To address this concern, numerous steps have been taken to eliminate the possibility of boost failure due to electronic or mechanical faults. In the ECU design, component redundancy is used throughout. This includes multiple wire feeds, multiple processors and internal circuit isolation for critical valve drivers. The extra components, and the resulting software to control them, do add a small level of additional complexity in themselves. Thermal robustness must also be carefully designed into the unit, as duty cycles for valves and motors will be higher than in an add-on type system. Thus, careful attention must be given to heat sinking, materials, circuit designs, and component selection. Special consideration must be given to the ECU cover heat transfer properties, which could include the addition of cooling fins. On the mechanical side there is redundancy in valves and wheel brake sensors, in that the vehicle may still be braked with two or three boosted channels. With regard to the E-H pump and accumulator, backup components are not typically considered practical from a size, mass, and cost viewpoint. However, these few components are extremely robust in nature and thoroughly tested to exceed durability requirements.
Similar to the days of early ABS introduction, multiple EHB hydraulic design configurations have emerged. From the mid 80’s through the latter part of the 1990’s numerous ABS configurations ranging from hydraulically boosted open systems, to four valve flow control designs, to modulators based upon ball screws and electric motors came to market before the 8-valve, closed recirculation system became the de facto standard. As with any new technology, there are concerns and tradeoffs to be dealt with. In the case of the electro-hydraulic brake they center around increased electrical and mechanical complexity, failsafe braking performance, accumulator safety, and 2-wheel versus 4-wheel backup modes. Each of these concerns has been answered by prudent designs and incorporation of new component technologies. The configuration adopted in Delphi’s EHB development has included use of four-wheel failsafe with individual isolation pistons and utilization of mechanical pedal feel lockout. This particular design allows system flexibility, inherent accumulator precharge isolation, and the ability to tune for optimum failed system stopping performance for all vehicle classes.

Ultimately, no matter which final configuration is selected for a specific vehicle platform, it will have to undergo the rigors of full brake system validation. A carefully de-signed and implemented EHB system holds the promise of enabling the new brake-by-wire features while still reliably performing the everyday task of stopping the vehicle.

Noise Control in IC Engine

Abstract
Noise control is becoming increasingly important for a wide variety of OEM designers. Examples of products that take noise control into account during their design cycles include equipment such as computer hard drives, household appliances, and material handling and transportation equipment. In the transportation market, which includes the aircraft, ground and marine segments, the demand is for low noise level goals. Achieving these goals is of primary importance for an OEM to remain competitive or to keep a given supremacy in the market. The automotive industry has been a leader in the adoption of noise control technologies. Methods in use for several years for the prediction of interior noise levels include the finite element method (FEM), statistical energy analysis (SEA), and boundary element analysis (BEA). The internal combustion engine has mechanized the world; since the early 1900s it has been our prime source of mechanical power. The vast number of internal combustion engines in the world today has resulted in air pollution, noise pollution and related problems.
There has been a direct relationship between the improvement in man's physical standard of living and the degree of his development of machines. The industrial revolution was really a series of social and industrial transformations, beginning in England with the use of coal in place of charcoal for the smelting of iron, progressing through the stages of steam engines and electric motors and all the producing and processing made possible by these devices, and on to the age of gasoline power and of land, sea and air transportation. Sweeping mechanical progress has since brought automation and the utilization of nuclear energy; but with every new machine a little noise is created. With every mechanism employed to do man's work, some mechanical or electrical power is converted into acoustical power, so that with the rise in people's standard of living there occurs also a rise in the noise level of people's confines.
Internal Combustion Engine Noise:
One typical engine noise classification technique separates the aerodynamic noise, combustion noise and mechanical noise.
1. AERODYNAMIC NOISE
2. COMBUSTION NOISE
3. MECHANICAL NOISE
AERODYNAMIC NOISE- Aerodynamic noise includes exhaust gas and intake air noise, as well as noise generated by cooling fans, auxiliary fans or any other air flow.
COMBUSTION NOISE- combustion noise refers to noise generated by the vibrating surfaces of the engine structure, engine components and engine accessories after excitation by combustion forces.
MECHANICAL NOISE- mechanical noise refers to noise generated by the vibrating surfaces of the engine components and engine accessories after excitation by reciprocating or rotating engine components.

EXHAUST SYSTEM NOISE: Exhaust system noise includes the noise from exhaust gas pulses as they leave the muffler or tail pipe, and noise emitted from the vibrating surfaces of the exhaust system components. Noise emitted from the surfaces of exhaust system components results from two different types of excitation forces: those generated by the pulsating exhaust gas flow and those transmitted from the vibrating engine to exhaust system components. Additional considerations in the reduction of exhaust system noise include proper selection of piping lengths and diameters, proper mounting of exhaust system components and proper positioning of the exhaust outlet.
INTAKE SYSTEM NOISE: Intake system noise includes noise generated by the flow of air through the system's air inlet and noise emitted from vibrating surface components. As with exhaust systems, surface-radiated noise results from two different types of excitation forces: those generated by the pulsating intake air flow and those transmitted from the vibrating engine to intake system components. In many instances, an engine's air cleaner will provide significant attenuation of intake air noise. If additional attenuation is required, an intake air silencer can be added to the system. To minimize intake system surface-radiated noise, proper design, selection and mounting of intake system components are essential.
COOLING SYSTEM NOISE: Water-cooled engines are typically cooled by using a radiator as a heat exchanger, with an axial-flow fan used to draw cooling air through the radiator. Air-cooled engines generally use a centrifugal fan in conjunction with shrouding to direct cooling air across the engine. Fan noise consists of both discrete frequency tones and broadband noise. The broadband components of fan noise are caused by the shedding of vortices from the rotating fan blades and by turbulence in the fan's air stream.
Water Cooled Engines
A variety of design parameters affect the sound-emission levels of axial-flow fans, but fan blade tip speed is the dominant factor. To minimize fan tip speed while still providing sufficient engine cooling, the cooling system's efficiency must be as high as possible. To maximize cooling system efficiency in water-cooled engines, the following considerations should be made:
1. Use a water pump and radiator that have adequate capacities; furthermore, be sure that the radiator core has sufficient surface and air flow areas.
2. Use a fan with a proper aerodynamic blade design.
3. Use a shroud to prevent recirculation of air from the high-pressure side of the fan to the low-pressure side. Clearance between the tips of the fan blades and the shrouding should be minimal.
4. Reduce air flow resistance and turbulence in the system. This can be achieved through proper shroud design, proper spacing between the fan and radiator, and proper radiator core design.
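Since tip speed is the dominant noise factor, it is worth seeing how directly it follows from fan geometry and shaft speed. The sketch below uses the standard relation v = pi * D * n / 60; the diameter and rpm values are illustrative assumptions.

```python
import math

def fan_tip_speed(diameter_m, rpm):
    """Blade tip speed in m/s: circumference times revolutions per second,
    v = pi * D * (rpm / 60)."""
    return math.pi * diameter_m * rpm / 60.0

# Illustrative: a 0.5 m fan at 2400 rpm
print(f"{fan_tip_speed(0.5, 2400):.1f} m/s")
```

The relation makes the design trade-off above concrete: a larger fan moving the same air at lower rpm, or a more efficient cooling circuit that permits lower rpm, both reduce tip speed and hence noise.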
Remedial Measures
1. Stopping It at the Source

• Improved engineering in many noisy products (e.g., snowmobiles) has cut noise by nearly 30 decibels.
• Government regulations require manufacturers such as GM and Mack Truck to reduce vibration in heavy gears, axles and transmissions.
• Reducing sound at the source by an average of 10 decibels cuts loudness in half.

2. Shielding Your Ears
• Without doubt, plugging your ears is the cheapest and easiest method of noise control.
• If you have to be around loud noise, protecting yourself with earplugs is better than doing nothing.
• Excessive exposure to loud noise, or exposure to a sudden loud noise, can cause serious damage to your ears.

Non-Destructive Testing

Non-Destructive Testing (NDT) is a wide group of analysis techniques used in science and industry to evaluate the properties of a material, component or system without causing damage. The terms nondestructive examination (NDE), nondestructive inspection (NDI), and nondestructive evaluation (NDE) are also commonly used to describe this technology. Because NDT does not permanently alter the article being inspected, it is a highly valuable technique that can save both money and time in product evaluation, troubleshooting, and research.

Non-destructive testing is one part of the function of quality control and is complementary to other long-established methods. By definition, non-destructive testing is the testing of materials, for surface or internal flaws or metallurgical condition, without interfering in any way with the integrity of the material or its suitability for service.

The technique can be applied on a sampling basis for individual investigation or may be used for 100% checking of material in a production quality control system. Whilst being a high-technology concept, evolution of the equipment has made it robust enough for application in any industrial environment at any stage of manufacture - from steel making to site inspection of components already in service. A certain degree of skill is required to apply the techniques properly in order to obtain the maximum amount of information concerning the product, with consequent feedback to the production facility. Non-destructive testing is not just a method for rejecting substandard material; it is also an assurance that the supposedly good is good. The technique uses a variety of principles; there is no single method around which a black box may be built to satisfy all requirements in all circumstances.
Radiography:
This technique is suitable for the detection of internal defects in ferrous and nonferrous metals and other materials. X-rays, generated electrically, and Gamma rays emitted from radio-active isotopes, are penetrating radiation which is differentially absorbed by the material through which it passes; the greater the thickness, the greater the absorption. Furthermore, the denser the material the greater the absorption. X and Gamma rays also have the property, like light, of partially converting silver halide crystals in a photographic film to metallic silver, in proportion to the intensity of the radiation reaching the film, and therefore forming a latent image. This can be developed and fixed in a similar way to normal photographic film. Material with internal voids is tested by placing the subject between the source of radiation and the film. The voids show as darkened areas, where more radiation has reached the film, on a clear background. The principles are the same for both X and Gamma radiography.
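The differential absorption described above follows the exponential attenuation law I = I0 * exp(-mu * x): thicker or denser material (larger mu) transmits less radiation. The sketch below uses an illustrative attenuation coefficient, not a tabulated value for any specific metal or energy.

```python
import math

def transmitted_intensity(i0, mu_per_mm, thickness_mm):
    """Exponential attenuation: I = I0 * exp(-mu * x). A void removes
    material from the path, so more radiation reaches the film there."""
    return i0 * math.exp(-mu_per_mm * thickness_mm)

# Illustrative mu = 0.05 per mm (an assumed value for the sketch)
solid = transmitted_intensity(100.0, 0.05, 20)   # 20 mm of sound metal
voided = transmitted_intensity(100.0, 0.05, 15)  # same path with a 5 mm void
print(solid < voided)  # the void exposes the film more, so it appears darker
```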

In X-radiography the penetrating power is determined by the number of volts applied to the X-ray tube - in steel, approximately 1000 volts per inch of thickness is necessary. In Gamma radiography the isotope governs the penetrating power and is unalterable for each isotope. Thus Iridium 192 is used for 1/2" to 1" steel and Caesium 134 is used for 3/4" to 2-1/2" steel. In X-radiography the intensity, and therefore the exposure time, is governed by the amperage of the cathode in the tube. Exposure time is usually expressed in terms of milliampere-minutes. With Gamma rays the intensity of the radiation is set at the time of supply of the isotope. The intensity of radiation from isotopes is measured in becquerels and reduces over a period of time.
The time taken to decay to half the number of curies is the half-life and is characteristic of each isotope. For example, the half-life of Iridium 192 is 74 days, and that of Caesium 134 is 2.1 years. The exposure factor is a product of the number of curies and time, usually expressed in curie-hours. The time of exposure must be increased as the isotope decays; when the exposure period becomes uneconomical the isotope must be renewed. As the isotope is continuously emitting radiation, it must be housed, whilst not in use, in a container of depleted uranium or similar dense shielding material to protect the environment and personnel.
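The half-life and curie-hour relationships above translate directly into the arithmetic a radiographer uses to stretch exposure times as a source ages. A minimal sketch, using the Iridium 192 figures from the text and an assumed required exposure factor:

```python
def activity_after(initial_curies, half_life_days, elapsed_days):
    """Source strength decays as A = A0 * 0.5 ** (t / T_half)."""
    return initial_curies * 0.5 ** (elapsed_days / half_life_days)

def exposure_time_hours(required_curie_hours, current_curies):
    """Exposure factor = curies x hours, so required time grows as the
    source decays."""
    return required_curie_hours / current_curies

# Iridium 192 (half-life 74 days); 20 Ci source, assumed 30 curie-hour shot
a = activity_after(20.0, 74, 74)          # after one half-life -> 10 Ci
print(a, exposure_time_hours(30.0, a))    # exposure doubles to 3.0 h
```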
Magnetic Particle Inspection:
This method is suitable for the detection of surface and near-surface discontinuities in magnetic materials, mainly ferritic steel and iron.
Fig. The principle of magnetic particle inspection.
The principle is to generate magnetic flux in the article to be examined, with the flux lines running along the surface at right angles to the suspected defect. Where the flux lines approach a discontinuity, they stray out into the air at the mouth of the crack. The crack edges become magnetically attractive poles, north and south. These have the power to attract finely divided particles of magnetic material such as iron filings. Usually these particles are an oxide of iron in the size range 20 to 30 microns, and are suspended in a liquid which provides mobility for the particles on the surface of the test piece, assisting their migration to the crack edges. However, in some instances they can be applied in a dry powder form. The particles can be red or black oxide, or they can be coated with a substance which fluoresces brilliantly under ultra-violet illumination (black light). The object is to present as great a contrast as possible between the crack indication and the material background. The technique not only detects those defects which are not normally visible to the unaided eye, but also renders easily visible those defects which would otherwise require close scrutiny of the surface. There are many methods of generating magnetic flux in the test piece, the simplest being the application of a permanent magnet to the surface, but this method cannot be controlled accurately because of indifferent surface contact and deterioration in magnetic strength. Modern equipment generates the magnetic field electrically, either directly or indirectly.
Eddy Current and Electro-Magnetic Methods
The main applications of the eddy current technique are the detection of surface or subsurface flaws, conductivity measurement and coating thickness measurement. The technique is sensitive to the material conductivity, permeability and dimensions of a product. Eddy currents can be produced in any electrically conducting material that is subjected to an alternating magnetic field (typically 10 Hz to 10 MHz). The alternating magnetic field is normally generated by passing an alternating current through a coil. The coil can have many shapes and can have between 10 and 500 turns of wire.

The magnitude of the eddy currents generated in the product is dependent on conductivity, permeability and the set-up geometry. Any change in the material or geometry can be detected by the excitation coil as a change in the coil impedance. The simplest coil comprises a ferrite rod with several turns of wire wound at one end, positioned close to the surface of the product to be tested. When a crack, for example, occurs in the product surface, the eddy currents must travel farther around the crack, and this is detected by the impedance change.
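The sensitivity of the technique to depth is governed by the standard skin-depth relation, which explains why the 10 Hz to 10 MHz frequency range mentioned above matters: eddy-current density falls off with depth, and higher frequencies concentrate the currents near the surface. A sketch, using handbook values for copper as an illustration:

```python
import math

def skin_depth_m(freq_hz, conductivity_s_per_m, rel_permeability=1.0):
    """Standard eddy-current skin depth:
    delta = 1 / sqrt(pi * f * mu * sigma)."""
    mu = rel_permeability * 4e-7 * math.pi  # absolute permeability (H/m)
    return 1.0 / math.sqrt(math.pi * freq_hz * mu * conductivity_s_per_m)

# Copper (sigma ~ 5.8e7 S/m) at a 100 kHz test frequency
print(f"{skin_depth_m(1e5, 5.8e7) * 1000:.3f} mm")
```

In practice the inspector chooses the test frequency to place this depth at the region of interest: low frequencies for subsurface flaws, high frequencies for surface cracks and thin coatings.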

Plastic Injection Molding

Abstract
Injection molded components are consistently designed to minimize the design and manufacturing information content of the enterprise system. The resulting designs, however, are extremely complex and frequently exhibit coupling between multiple quality attributes. Axiomatic design principles were applied to the injection molding process to add control parameters that enable the spatial and dynamic decoupling of multiple quality attributes in the molded part. There are three major benefits of the process redesign effort. First, closed-loop pressure control has enabled tight coupling between the mass and momentum equations. This tight coupling allows the direct input and controllability of the melt pressure. Second, the use of multiple melt actuators provides for the decoupling of melt pressures between different locations in the mold cavity. Such decoupling can then be used to maintain functional independence of multiple quality attributes. Third, the heat equation has been decoupled from the mass and momentum equations. This allows the mold to be filled under isothermal conditions. Once the cavities are completely full and attain the desired packing pressure, the cooling is allowed to progress.

Injection molding is the most commonly used manufacturing process for the fabrication of plastic parts. A wide variety of products are manufactured using injection molding, which vary greatly in their size, complexity, and application. The injection molding process requires the use of an injection molding machine, raw plastic material, and a mold. The plastic is melted in the injection molding machine and then injected into the mold, where it cools and solidifies into the final part. The steps in this process are described in greater detail in the next section.
Injection molding is used to produce thin-walled plastic parts for a wide variety of applications, one of the most common being plastic housings. A plastic housing is a thin-walled enclosure, often requiring many ribs and bosses on the interior. These housings are used in a variety of products including household appliances, consumer electronics, power tools, and automotive dashboards. Other common thin-walled products include different types of open containers, such as buckets. Injection molding is also used to produce several everyday items such as toothbrushes or small plastic toys. Many medical devices, including valves and syringes, are manufactured using injection molding as well.
Machinery & Equipment:
Injection molding machines consist of a material hopper, an injection ram or screw-type plunger, and a heating unit. Also known as presses, they hold the molds in which the components are shaped. Presses are rated by tonnage, which expresses the amount of clamping force that the machine can exert. This force keeps the mold closed during the injection process. Tonnage can vary from less than 5 tons to 6000 tons, with the higher figures used in comparatively few manufacturing operations.
The total clamp force needed is determined by the projected area of the part being molded. This projected area is multiplied by a clamp force of 2 to 8 tons for each square inch of projected area. As a rule of thumb, 4 or 5 tons/in² can be used for most products. If the plastic material is very stiff, it will require more injection pressure to fill the mold, and thus more clamp tonnage to hold the mold closed. The required force is also determined by the material used and the size of the part; larger parts require higher clamping force.
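The rule of thumb above is simple enough to sketch directly. The part area in the example is an illustrative assumption; the tonnage factors come from the 2 to 8 ton/in² range stated in the text.

```python
def clamp_tonnage(projected_area_in2, tons_per_in2=4.0):
    """Rule-of-thumb clamp force: projected part area times a 2-8
    ton/in^2 factor, with 4-5 ton/in^2 typical for most products."""
    return projected_area_in2 * tons_per_in2

# An assumed part with a 50 in^2 projected area at the 4 ton/in^2 default
print(clamp_tonnage(50))          # 200 tons
print(clamp_tonnage(50, 8.0))     # stiff material: 400 tons
```

This is why press ratings span such a wide range: a small closure cap needs only a few tons, while a large automotive panel can demand thousands.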
Fig. Injection Molding Machine.
Injection molding machines have many components and are available in different configurations, including a horizontal configuration and a vertical configuration. However, regardless of their design, all injection molding machines utilize a power source, injection unit, mold assembly, and clamping unit to perform the four stages of the process cycle.
Injection Unit:
The injection unit is responsible for both heating and injecting the material into the mold. The first part of this unit is the hopper, a large container into which the raw plastic is poured. The hopper has an open bottom, which allows the material to feed into the barrel. The barrel contains the mechanism for heating and injecting the material into the mold. This mechanism is usually a ram injector or a reciprocating screw. A ram injector forces the material forward through a heated section with a ram or plunger that is usually hydraulically powered. Today, the more common technique is the use of a reciprocating screw. A reciprocating screw moves the material forward by both rotating and sliding axially, being powered by either a hydraulic or electric motor.

The material enters the grooves of the screw from the hopper and is advanced towards the mold as the screw rotates. While it is advanced, the material is melted by pressure, friction, and additional heaters that surround the reciprocating screw. The molten plastic is then injected very quickly into the mold through the nozzle at the end of the barrel by the buildup of pressure and the forward action of the screw. This increasing pressure allows the material to be packed and forcibly held in the mold. Once the material has solidified inside the mold, the screw can retract and fill with more material for the next shot.
Clamping Unit:
Prior to the injection of the molten plastic into the mold, the two halves of the mold must first be securely closed by the clamping unit. When the mold is attached to the injection molding machine, each half is fixed to a large plate, called a platen. The front half of the mold, called the mold cavity, is mounted to a stationary platen and aligns with the nozzle of the injection unit. The rear half of the mold, called the mold core, is mounted to a movable platen, which slides along the tie bars. The hydraulically powered clamping motor actuates clamping bars that push the movable platen towards the stationary platen and exert sufficient force to keep the mold securely closed while the material is injected and subsequently cools. After the required cooling time, the mold is opened by the clamping motor. An ejection system, which is attached to the rear half of the mold, is actuated by the ejector bar and pushes the solidified part out of the open cavity.
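The four-stage process cycle that these units carry out can be sketched as a simple ordered sequence; the stage descriptions paraphrase the text, and the control loop is a placeholder for what would, on a real machine, block on sensors and timers:

```python
# The four stages of the injection molding process cycle, in order:
# clamping, injection, cooling, ejection.
CYCLE_STAGES = [
    ("clamping",  "clamping unit closes the two mold halves"),
    ("injection", "screw drives molten plastic through the nozzle"),
    ("cooling",   "material solidifies while the mold stays clamped"),
    ("ejection",  "ejector bar pushes the part out of the open mold"),
]

def run_cycle():
    # Yield each stage in order; a real controller would wait on
    # sensors (mold closed, shot complete, cooling timer) between
    # stages instead of simply iterating.
    for name, action in CYCLE_STAGES:
        yield name, action

for name, action in run_cycle():
    print(f"{name}: {action}")
```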

Pulse Detonation Engine

Rocket engines that work much like an automobile engine are being developed at NASA’s Marshall Space Flight Center in Huntsville, Ala. Pulse detonation rocket engines offer a lightweight, low-cost alternative for space transportation. Pulse detonation rocket engine technology is being developed for upper stages that boost satellites to higher orbits. The advanced propulsion technology could also be used for lunar and planetary landers and excursion vehicles that require throttle control for gentle landings.

The engine operates on pulses, so controllers could dial in the frequency of the detonation in the "digital" engine to determine thrust. Pulse detonation rocket engines operate by injecting propellants into long cylinders that are open on one end and closed on the other. When gas fills a cylinder, an igniter—such as a spark plug—is activated. Fuel begins to burn and rapidly transitions to a detonation, or powered shock. The shock wave travels through the cylinder at 10 times the speed of sound, so combustion is completed before the gas has time to expand. The explosive pressure of the detonation pushes the exhaust out the open end of the cylinder, providing thrust to the vehicle.
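Because thrust arrives in discrete pulses, the average thrust scales directly with how often the cylinder is fired. A minimal sketch of this "digital" throttle idea (all numbers are illustrative assumptions, not measured PDE data):

```python
def average_thrust(impulse_per_pulse, frequency_hz):
    # Average thrust [N] of a pulsed engine: impulse delivered per
    # detonation [N*s] times the number of detonations per second.
    return impulse_per_pulse * frequency_hz

# Illustrative numbers only: a tube delivering 5 N*s per detonation,
# "dialed" between 50 Hz and 100 Hz by the controller.
print(average_thrust(5.0, 50))   # 250.0 N
print(average_thrust(5.0, 100))  # 500.0 N: doubling the firing
                                 # frequency doubles average thrust
```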

A major advantage is that pulse detonation rocket engines boost the fuel and oxidizer to extremely high pressure without a turbo pump—an expensive part of conventional rocket engines. In a typical rocket engine, complex turbo pumps must push fuel and oxidizer into the engine chamber at an extremely high pressure of about 2,000 pounds per square inch or the fuel is blown back out.

The pulse mode of pulse detonation rocket engines allows the fuel to be injected at a low pressure of about 200 pounds per square inch. Marshall engineers and industry partners United Technology Research Corp. of Tullahoma, Tenn., and Adroit Systems Inc. of Seattle have built small-scale pulse detonation rocket engines for ground testing. During about two years of laboratory testing, researchers have demonstrated that hydrogen and oxygen can be injected into a chamber and detonated more than 100 times per second.
Pre-Compression and Detonation:
In the PDE, the pre-compression is instead a result of interactions between combustion and gas-dynamic effects: the combustion drives the shock wave, and the shock wave (through the temperature increase across it) is necessary for the fast combustion to occur. In general, detonations are extremely complex phenomena, involving forward-propagating as well as transverse shock waves, coupled more or less tightly to the combustion zone as the entity propagates.
The biggest obstacles to the realization of an air-breathing PDE are the initiation of the detonation and the high frequency at which the detonations must be repeated. Of these two obstacles, initiation is believed to be the more fundamental, since not all of the physical events involved in initiation are thoroughly understood. The detonation can be initiated in two ways: by direct initiation, where the detonation is established more or less immediately by a very powerful igniter, or by Deflagration to Detonation Transition (DDT), where an ordinary flame (i.e. a deflagration) accelerates to a detonation over a much longer time span.
Typically, hundreds of joules are required to obtain direct initiation of a detonation in a mixture of the most sensitive hydrocarbons and air, which prevents this method from being used in a PDE (if oxygen is used instead of air, these energy levels are drastically reduced). On the other hand, igniting an ordinary flame requires only a modest amount of energy, but the DDT requires lengths on the order of several meters to be completed, making this method impractical for a PDE as well.
It is important to point out that there are additional difficulties when liquid fuels are used which generally make them substantially more difficult to detonate. A common method to circumvent these difficulties is to use a pre-detonator - a small tube or a fraction of the main chamber filled with a highly detonable mixture (typically the fuel and oxygen instead of air) - in which the detonation can be easily initiated.

The detonation from the pre-detonator is then supposed to be transmitted to the main chamber and initiate the detonation there. The extra component carried on board (e.g. oxygen) for use in the pre-detonator will lower the specific impulse of the engine, and it is essential to minimize the amount of this extra component.
Combustion Analysis:
While real-gas effects are important considerations in the prediction of real PDE performance, it is instructive to examine thermodynamic cycle performance using perfect-gas assumptions. Such an examination provides three benefits. First, the simplified relations provide an opportunity to understand the fundamental processes inherent in the production of thrust by the PDE. Second, such an analysis provides the basis for evaluating the potential of the PDE relative to other cycles, most notably the Brayton cycle. Finally, a perfect-gas analysis provides the framework for developing a thermodynamic cycle analysis for the prediction of realistic PDE performance.

The present work undertakes such a perfect-gas analysis using a standard closed thermodynamic cycle. In the first sections, a thermodynamic cycle description is presented which allows prediction of PDE thrust performance. This cycle description is then modified to include the effects of inlet, combustor and nozzle efficiencies. The definition of these efficiencies is based on standard component performance.
 
Any thermodynamic cycle analysis of the PDE must begin by examining the influence of detonative combustion relative to conventional deflagrative combustion. The classical approach to detonative combustion analysis is to assume Chapman-Jouguet detonation conditions after combustion.
The subsonic Chapman-Jouguet solution represents the thermally choked ramjet. To ensure consistent handling of the PDE and ramjet, this work uses Rayleigh analysis for both cycles.
A comparison of the ideal-gas Rayleigh process loss was made for deflagration and Chapman-Jouguet detonation combustion. The comparison was made for a range of heat additions, represented here by the ratio of the increase in total temperature to the initial static temperature. Four different entrance Mach numbers were also considered. The figure of merit for the comparison is the ratio of the increase in entropy to the specific heat at constant pressure. The results show that at the same heat addition and entrance Mach number, detonation is consistently the more efficient combustion process, as evidenced by the lower increase in entropy. This combustion-process efficiency is one of the basic thermodynamic advantages of the PDE.
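The entropy comparison can be illustrated with perfect-gas relations. The sketch below models the Chapman-Jouguet detonation as supersonic Rayleigh heat addition ending at the thermally choked state (M = 1), and compares its entropy rise against constant-pressure combustion, which stands in for the low-Mach limit of deflagrative Rayleigh heat addition. The value of γ = 1.4 and the heat addition q = 5 are illustrative assumptions; this is a simplified sketch, not a reproduction of the full analysis discussed above.

```python
import math

G = 1.4  # ratio of specific heats (perfect-gas assumption)

def cj_mach(q):
    # Chapman-Jouguet detonation Mach number for nondimensional heat
    # addition q = Q/(cp*T1); closed-form perfect-gas result.
    h = 0.5 * (G + 1.0) * q
    return math.sqrt(h) + math.sqrt(h + 1.0)

def rayleigh_T(M):
    # Rayleigh flow static-temperature ratio T/T* at Mach M
    return M**2 * (1.0 + G)**2 / (1.0 + G * M**2)**2

def rayleigh_p(M):
    # Rayleigh flow static-pressure ratio p/p* at Mach M
    return (1.0 + G) / (1.0 + G * M**2)

def ds_cp_detonation(q):
    # Entropy rise ds/cp across a CJ detonation: supersonic Rayleigh
    # heat addition from M_CJ down to the choked state M = 1.
    M1 = cj_mach(q)
    T21 = rayleigh_T(1.0) / rayleigh_T(M1)  # T2/T1
    p21 = rayleigh_p(1.0) / rayleigh_p(M1)  # p2/p1
    return math.log(T21) - (G - 1.0) / G * math.log(p21)

def ds_cp_deflagration(q):
    # Entropy rise for constant-pressure combustion: ds/cp = ln(T2/T1)
    return math.log(1.0 + q)

q = 5.0  # heat addition of 5x the inlet static enthalpy (illustrative)
print(round(cj_mach(q), 3))            # CJ wave Mach, ~5.095
print(round(ds_cp_detonation(q), 3))   # ~1.449
print(round(ds_cp_deflagration(q), 3)) # ~1.792, i.e. detonation shows
                                       # the lower entropy rise
```

Running the sketch reproduces the qualitative result stated in the text: for the same heat addition, the detonation branch incurs a smaller entropy increase than deflagrative combustion.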

Virtual Manufacturing Systems

Abstract:
The term Virtual Manufacturing is now widespread in the literature, but several definitions are attached to these words. First we have to define the objects that are studied. Virtual manufacturing concepts originated in machining operations and evolved in that manufacturing area; however, one can now find many applications in different fields such as casting, forging, sheet metalworking and robotics (mechanisms). The general idea behind most definitions is that “Virtual Manufacturing is nothing but manufacturing in the computer”. This short definition comprises two important notions: the process (manufacturing) and the environment (computer).

In [1, 2] VM is defined as “manufacture of virtual products defined as an aggregation of computer-based information that provide a representation of the properties and behaviours of an actualized product”.

Some researchers present VM with respect to virtual reality (VR). On one hand, in [3] VM is represented as a virtual world for manufacturing, on the other hand, one can consider virtual reality as a tool which offers visualization for VM [4] .

The most comprehensive definition, proposed by the Institute for Systems Research, University of Maryland, and discussed in [5, 6], is “an integrated, synthetic manufacturing environment exercised to enhance all levels of decision and control”.

A similar definition has been proposed: “Virtual Manufacturing is a system, in which the abstract prototypes of manufacturing objects, processes, activities, and principles evolve in a computer-based environment to enhance one or more attributes of the manufacturing process.”

One can also define VM by focusing on available methods and tools that allow a continuous, experimental depiction of production processes and equipment using digital models. The areas concerned are (i) product and process design, (ii) process and production planning, (iii) machine tools, robots and manufacturing systems, and (iv) virtual reality applications in manufacturing.
The Scope of Virtual Manufacturing:
The scope of VM can be to define the product, processes and resources within cost, weight, investment, timing and quality constraints in the context of the plant in a collaborative environment. Three paradigms are proposed in [5]:

a) Design-centered VM: provides manufacturing information to the designer during the design phase. In this case VM is the use of manufacturing-based simulations to optimize the design of products and processes for a specific manufacturing goal (DFA, quality, flexibility, etc.), or the use of simulations of processes to evaluate many production scenarios at many levels of fidelity and scope to inform design and production decisions.

b) Production-centered VM: uses simulation capability to model manufacturing processes with the purpose of allowing inexpensive, fast evaluation of many processing alternatives. From this point of view VM is the production-based converse of Integrated Product Process Development (IPPD), which optimizes manufacturing processes and adds analytical production simulation to other integration and analysis technologies to allow high-confidence validation of new processes and paradigms.

c) Control-centered VM: is the addition of simulations to control models and actual processes allowing for seamless simulation for optimization during the actual production cycle.

Another vision is proposed by Marinov in [7]. The activities in manufacturing include design, material selection, planning, production, quality assurance, management, marketing, etc. If the scope takes into account all these activities, we can consider the system a Virtual Production System. A VM system includes only that part of the activities which leads to a change of the product attributes (geometrical or physical characteristics, mechanical properties, etc.) and/or process attributes (quality, cost, agility, etc.). The scope is then viewed in two directions: a horizontal scope along the manufacturing cycle, which involves two phases, design and production, and a vertical scope across the enterprise hierarchy. Within the manufacturing cycle, design includes part and process design, and the production phase includes part production and assembly.

We choose to define the objectives, scope and domains concerned by Virtual Manufacturing by means of the 3D matrix represented in Fig. 2, which has been proposed by the IWB, Munich.
The vertical planes represent the three main aspects of manufacturing today: Logistics, Production and Assembly, which cover all aspects directly related to the manufacturing of industrial goods. The horizontal planes represent the different levels within the factory. At the lowest (microscopic) level, VM has to deal with unit operations, which include the behaviour and properties of the material and the models of the machine tool – cutting tool – workpiece – fixture system. These models are then encapsulated to become VM cells, inheriting the characteristics of the lower level plus some extra characteristics from new objects such as a virtual robot. Finally, the macroscopic (factory) level is derived from all relevant sub-systems. The last axis deals with the methods we can use to achieve VM systems.
Applications of VM:
The attractive applications of VM include: analysis of the manufacturability of a part or product; evaluation and validation of the feasibility of production and process plans; and optimisation of the production process and the performance of the manufacturing system. Since a VM model is established from real manufacturing facilities and processes, it not only provides realistic information about the product and its manufacturing processes but also allows for their evaluation and validation. Many iterations can be carried out to arrive at an optimal solution. The modelling and simulation technologies in VM enhance production flexibility and reduce fixed costs, since no physical conversion of materials to products is involved. Apart from these, VM can be used to reliably predict business risks, which supports management in decision making and in the strategic management of an enterprise.

Some typical applications of VM are as follows:
1. VM can be used in the evaluation of the feasibility of a product design, validation of a production plan, and optimisation of the product design and processes. This reduces costs across the product life cycle.

2. VM can be used to test and validate the accuracy of product and process designs: for example, the appearance of a product design, dynamic characteristics analysis, checking the tool path during machining, NC program validation, and checking for collision problems in machining and assembly.

3. With the use of VM on the Internet, it is possible to conduct training under a distributed virtual environment for the operators, technicians and management people on the use of manufacturing facilities. The costs of training and production can thus be reduced.

4. As a knowledge acquisition vehicle, VM can be used to continuously acquire manufacturing know-how, traditional manufacturing processes, production data, etc. This can help to upgrade the level of intelligence of a manufacturing system.