Management of Renewable Energies and Environmental Protection, Part I

The purpose of this project is to present an overview of renewable energy sources, major technological developments and case studies, together with practical examples of their use. Renewable energy comes from naturally replenished resources: wind, sunlight, rain, sea waves, tides and geothermal heat. Greenhouse gas emissions are a major driver of climate change, with potentially disastrous effects on humanity.


Keywords: Environmental Protection, Renewable Energy, Sustainable Energy, Wind, Sunlight, Rain, Sea Waves, Tides, Geothermal Heat, Natural Regeneration.

Introduction
The purpose of this project is to present an overview of renewable energy sources, major technological developments and case studies, together with practical examples of their use.

Renewable energy comes from naturally replenished resources: wind, sunlight, rain, sea waves, tides and geothermal heat.

Greenhouse gas emissions are a major driver of climate change, with potentially disastrous effects on humanity. The use of Renewable Energy Sources (RES) together with improved Energy Efficiency (EE) can reduce energy consumption and greenhouse gas emissions and, as a consequence, help prevent dangerous climate change.

At least one-third of global energy must come from renewable sources by 2050: wind, solar, geothermal, hydroelectric, tidal, wave, biomass, etc.

Oil and natural gas, the classical sources of energy, fluctuate in price on the international market. A second significant aspect is the increasingly limited nature of oil resources: known reserves, whether in production or under exploration, are expected to be exhausted in roughly 50 years at current consumption rates.

“Green” energy is within reach of both economic operators and individuals.

In fact, an economic operator can use such a system both for its own consumption and for trading energy on the domestic market. The high up-front cost of these systems is generally recovered in about 5-10 years, depending on the installed production capacity.

The “sustainability” condition is met when projects based on renewable energy have a negative, or at least neutral, CO2 balance over their life cycle.

Emissions of Greenhouse Gases (GHG) are one of the environmental criteria included in a sustainability analysis, but they are not sufficient on their own. The concept of sustainability must also cover other aspects, such as environmental, cultural and health considerations, and must integrate economic aspects as well.

Generating renewable energy in a sustainable way is a challenge that requires compliance with national and international regulations.

Energy independence can be achieved:

At large scale (for communities)
At small scale (for individual houses, vacation homes or cabins without a grid connection)
Today, renewable energy has moved to the forefront and developed rapidly, thanks in part to governments and international organizations that have finally begun to recognize how essential it is for humanity: to avoid crises and wars and to maintain a modern way of life (we can’t go back to caves).

Materials and Methods
Solar Energy
Solar energy is the energy produced directly by converting the light radiated by the Sun into other forms of energy. It can be used to generate electricity or to heat air and water. Although solar energy is renewable and easy to harness, the main problem is that the Sun does not deliver constant energy throughout the day: output depends on the day-night cycle, weather conditions and the season.

Solar panels generate electricity for approximately 9 h per day (a conservative figure based on winter daylight), feeding consumers and charging batteries at the same time.
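As a rough illustration of the figures above, the daily yield of a small photovoltaic array can be estimated from its rated power and the roughly 9 h/day of useful sunlight. The array rating and derating factor below are illustrative assumptions, not data from the text:

```python
# Rough daily-yield estimate for a small PV array (illustrative values only).
rated_power_kw = 3.0        # assumed array rating
sun_hours_per_day = 9.0     # conservative winter figure quoted in the text
derate = 0.75               # assumed losses: inverter, temperature, soiling

daily_energy_kwh = rated_power_kw * sun_hours_per_day * derate
print(f"Estimated daily yield: {daily_energy_kwh:.2f} kWh")
```

Such a back-of-envelope figure is only a starting point; a real sizing study would use site-specific irradiation data.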

Solar installations are of two types: thermal and photovoltaic.

Photovoltaic installations produce electricity directly, while thermal ones can save about 75% of other fuels (wood, gas) per year. A house equipped with both (photovoltaic and vacuum-tube thermal panels) can be considered energy independent, because the energy accumulated during the day is sent to the grid and drawn back as needed.

The use of solar radiation for the production of electricity can be done by several methods:

The use of photovoltaic modules, which capture the energy of photons coming from the Sun and convert it into an electric current carried by free electrons
The use of solar towers
The use of parabolic concentrators. This type of concentrator consists of a trough-shaped parabolic mirror that focuses solar radiation onto a pipe. A working fluid, generally an oil, circulates in the pipe, takes up the heat and transfers it to water to produce the steam that drives the turbine of an electric generator. The concentrator must track the Sun's apparent daily motion
The use of the Dish-Stirling system
Solar installations work even when the sky is overcast. The best panels are also resistant to hail.

Solar-thermal systems are mainly built with flat-plate solar collectors or vacuum-tube collectors, the latter being better suited to the lower solar radiation levels found in Europe. The energy potential assessments considered applications such as water heating for buildings and swimming pools (domestic hot water, space heating, etc.).

Locations for solar-thermal applications (thermal energy).

In this case, any available space can be used if:

It allows the installation of solar-thermal collectors
It offers a preferential orientation to the South and an inclination matching the location's latitude
This is the case for roofs of houses/blocks, adjacent buildings (covered parking lots, etc.) or land on which solar-thermal collectors can be located (Aversa et al., 2017a-d; 2016a-d; Petrescu et al., 2016a-b; Mirsayar et al., 2017; Blue Planet; World Tree, From Wikipedia; Giovanni et al., 2012).

For the solar photovoltaic potential, both grid-connected photovoltaic applications and autonomous (off-grid) applications for isolated consumers were considered.

Solar energy can be harnessed very easily with a photovoltaic system. This type of system transforms sunlight into electricity throughout the year, with the caveat that only high-quality photovoltaic systems, which keep producing electricity over a long lifetime, are profitable. The system also allows other energy sources, such as the wind energy produced by a turbine, to be coupled with solar energy. Besides the converter, a sufficiently large battery is also needed, one that stores surplus energy while consumption is low and releases it when necessary, for example during the night. A system for producing, distributing and maintaining renewable energy for a house, cottage, motel or hospital, even in isolated places out of reach of the power grid, is presented in figure 3. If the wind does not blow for a long period and the sky is not sunny, an electric generator must be included in the system.
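The battery sizing mentioned above can be sketched with a standard rule-of-thumb calculation. Every number below (load, autonomy, depth of discharge, system voltage) is an illustrative assumption, not a value from the text:

```python
# Sketch: sizing the battery bank of an off-grid house (assumed values).
nightly_load_kwh = 6.0        # energy drawn between sunset and sunrise
autonomy_days = 2             # days the bank must cover with no sun or wind
depth_of_discharge = 0.5      # usable fraction of a lead-acid bank's capacity
system_voltage = 48.0         # DC bus voltage, volts

required_kwh = nightly_load_kwh * autonomy_days / depth_of_discharge
required_ah = required_kwh * 1000.0 / system_voltage
print(f"Battery bank: {required_kwh:.1f} kWh ({required_ah:.0f} Ah at {system_voltage:.0f} V)")
```

Dividing by the depth of discharge is what protects the batteries: only half of a lead-acid bank's nameplate capacity is assumed usable here.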

Wind Potential
The winds arise because the Earth's equatorial regions receive more solar radiation than the polar regions, creating large-scale convection currents in the atmosphere. According to meteorological assessments, about 1% of the solar input is converted to wind energy, while 1% of the daily wind energy contribution is roughly equivalent to the world's daily energy consumption. This means that global wind resources are large and widespread. More detailed assessments are needed to quantify the resources of particular areas.

Wind energy has been harnessed for centuries, first with sailboats, windmills and grain milling. It was only at the beginning of the twentieth century that high-speed wind turbines were developed to generate electricity. The term wind turbine is widely used today for a rotating-blade machine that converts the kinetic energy of the wind into useful energy. There are currently two basic categories of wind turbine, depending on the orientation of the rotor axis: Horizontal-Axis Wind Turbines (HAWT) and Vertical-Axis Wind Turbines (VAWT).

Wind power applications involve electricity generation, with wind turbines operating in parallel with grid or utility systems, in remote locations, or in parallel with fossil-fuelled engines (hybrid systems). The gains from exploiting wind energy come both from lower fossil-fuel consumption and from a reduction in the overall cost of generating electricity. Electric utilities have the flexibility to accept a contribution of about 20% from wind power systems, and combined wind-diesel systems can offer fuel savings of over 50%.

Wind power generation is a fairly young industry: 20 years ago, wind turbines in Europe had not yet reached commercial maturity. In some countries, wind energy already competes with fossil and nuclear energy, even without counting its environmental benefits.

When estimating the cost of electricity produced in conventional power plants, their influence on the environment (acid rain, effects of climate change, etc.) is usually not taken into account. Wind energy production continues to improve by reducing costs and increasing efficiency.

The cost of wind energy is between 5 and 8 cents per kWh and is expected to fall to 4 cents per kWh in the near future. Maintenance of wind energy projects is simple and inexpensive. Land-lease payments to farmers provide additional income to rural communities. Local companies that build wind farms provide short-term local jobs, while long-term jobs are created for maintenance work. Wind energy is a rapidly growing industry worldwide.

An indispensable requirement for using wind to produce energy is a steady flow of strong wind. The maximum power a Wind Turbine (WT) is designed to generate is called its “rated power”, and the wind speed at which rated power is reached is the “wind speed at rated power”. This is chosen to suit the wind regime at the site and is generally about 1.5 times the average wind speed there. One way to classify wind speed is the Beaufort scale, which provides a qualitative description of wind characteristics. It was originally designed for sailors and described the state of the sea, but was later extended to cover wind effects on land.

The power produced by the wind turbine increases from zero, at the starting (cut-in) wind speed (usually around 4 m/s, again depending on the location), to its maximum at the wind speed at rated power. Above this speed, the turbine continues to produce the rated power, at lower efficiency, until the wind speed becomes dangerously high, i.e., about 25 to 30 m/s (a violent storm), and the turbine shuts down. This is the cut-out speed of the wind turbine. The exact energy produced by a wind turbine depends on the distribution of wind speed over the year at the site.
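The cut-in, rated and cut-out behaviour described above can be sketched as an idealized piecewise power curve. The rated speed and rated power below are assumptions chosen for illustration; only the cut-in (4 m/s) and cut-out (25 m/s) values echo the text:

```python
def turbine_power(v, cut_in=4.0, rated_speed=13.0, cut_out=25.0, rated_power=2000.0):
    """Idealized wind-turbine power curve in kW (wind speeds in m/s)."""
    if v < cut_in or v >= cut_out:
        return 0.0                       # below cut-in or above cut-out: no output
    if v < rated_speed:
        # output rises with the cube of wind speed between cut-in and rated speed
        return rated_power * (v**3 - cut_in**3) / (rated_speed**3 - cut_in**3)
    return rated_power                   # plateau at rated power up to cut-out

for v in (3, 8, 13, 20, 26):
    print(v, round(turbine_power(v), 1))
```

Real manufacturer power curves are tabulated from measurements rather than cubic fits, but the shape (zero, cubic rise, plateau, shut-off) is the same.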

Air currents can be used to drive wind turbines. Modern wind turbines produce between 600 kW and 5 MW, with output powers of 1.5-3 MW being the most common, as these machines are simpler to build and better suited to commercial use. The output power of a typical wind turbine depends on the cube of the wind speed, so as the wind speed increases, the power generated rises dramatically. The world's technically exploitable wind potential could provide five times more energy than is consumed today.
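The cubic dependence mentioned above follows from the kinetic power of the airflow, P = ½ρAv³. A quick numerical check (rotor radius is an assumed value) shows that doubling the wind speed multiplies the available power by eight:

```python
import math

# The power in the wind grows with the cube of wind speed.
rho = 1.225                   # air density at sea level, 15 °C, kg/m^3
radius = 40.0                 # assumed rotor radius, m
area = math.pi * radius**2    # swept area, m^2

def wind_power_kw(v):
    """Kinetic power of the airflow through the rotor disc, in kW."""
    return 0.5 * rho * area * v**3 / 1000.0

print(wind_power_kw(6.0), wind_power_kw(12.0))
# Doubling the speed multiplies the available power by 2**3 = 8:
assert abs(wind_power_kw(12.0) / wind_power_kw(6.0) - 8.0) < 1e-9
```

Note this is the power in the airflow, not the turbine output; the extractable fraction is bounded by the Betz limit discussed below.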

In the strategy for exploiting renewable energy sources, the declared wind potential is 14,000 MW (installed power), which could provide about 23,000 GWh/year. These values represent an estimate of the theoretical potential and must be interpreted in correlation with the possibilities of technical and economic exploitation. Starting from the theoretical potential, what matters for energy-development forecasts is the potential that can practically be used in wind applications, which is much smaller, depending on land-use possibilities and energy-market conditions. The economically profitable wind potential can therefore only be estimated in the medium term, based on the technological and economic data known today and considered valid for that horizon.

Under ideal conditions, the theoretical maximum of the power coefficient cp is 16/27 = 0.593 (known as the Betz limit); in other words, a wind turbine can theoretically extract at most 59.3% of the energy in the airflow. Under real conditions, the power coefficient does not exceed about 50%, as it includes all of the turbine's losses. In most current technical publications, the quoted cp value covers all losses and represents the product cp * η. The power output and the extraction potential differ depending on the power coefficient and the turbine efficiency.

If cp reaches the theoretical maximum, the wind speed immediately behind the rotor, v2, is only 1/3 of the speed in front of the rotor, v1. Wind turbines located in a wind farm therefore produce less energy because of the reduction in wind speed caused by the turbines upstream of them. Increasing the distance between turbines reduces this loss, as the wind flow accelerates again. A correctly designed wind farm can keep losses from these mutual interference (wake) effects below about 10%.
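Both figures quoted above, the 16/27 limit and the downstream speed of v1/3, come from the standard actuator-disc model, in which the power coefficient as a function of the axial induction factor a is cp(a) = 4a(1 − a)². A quick numerical scan (not site data, just a check of the standard result) reproduces them:

```python
# Actuator-disc model behind the Betz limit (standard textbook result).
# cp(a) = 4*a*(1 - a)**2, downstream speed v2 = v1*(1 - 2*a).
def cp(a):
    return 4.0 * a * (1.0 - a) ** 2

# Locate the maximum numerically over a fine grid of induction factors:
best_a = max((i / 10000.0 for i in range(1, 5000)), key=cp)
print(round(best_a, 3), round(cp(best_a), 4))   # a ~ 1/3, cp ~ 16/27 ~ 0.5926

v1 = 9.0                       # assumed upstream wind speed, m/s
v2 = v1 * (1.0 - 2.0 * best_a) # downstream speed at the optimum: v1/3
print(round(v2, 2))
```

The scan confirms the optimum at a = 1/3, where cp = 16/27 and the flow behind the rotor is slowed to one third of the upstream speed, which is exactly the wake loss the paragraph above describes.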

The average annual power will vary from site to site; sites with high wind speeds yield more energy. This highlights the importance of strong winds, and hence the implications of the wind climate for the economics of wind energy production.

Gusts are responsible for mixing the air, and their action can be treated in a way similar to molecular diffusion. As a vortex passes the measuring point, the wind speed takes the value of that vortex for a period proportional to its size; this is a “gust”. In most cases, the resulting load variation is not significant. However, if the vortex scale is of the same order as the scale of a turbine component, the load variation may affect that component. A 3-s gust corresponds to a vortex of about 20 m (comparable to the length of a rotor blade), while a 15-s gust corresponds to a vortex of about 50 m.

To calculate the maximum load a turbine or its components may experience over the turbine's lifetime, the highest gust value for a relevant period is used. This is formulated as the maximum wind speed and gust speed over a 50-year period. Of course, these speeds can be exceeded during this period; the design margin allows for some exceedance. Calculating these stresses is particularly important for flexible structures such as turbines, which are more susceptible to wind-induced damage than rigid structures such as buildings.

A wind turbine can be placed almost anywhere in sufficiently open terrain. Nevertheless, a wind farm is a commercial project, so its profitability must be optimized. This matters not only for profitability over the operating lifetime but also for mobilizing capital in the initial phase of project development. For sound planning of investments in wind energy, the prevailing wind conditions in the area of interest must be known as reliably as possible.

For reasons of time and cost, long measurement campaigns are often avoided. As a substitute, mathematical methods can be used to predict wind speeds at each location. The wind conditions and energy production resulting from the calculation can serve as the basis for economic calculations. In addition, simulation of wind conditions can be used to correlate wind measurements at a particular location with conditions at neighbouring locations, in order to determine the wind regime for a whole area.

Since wind speed can vary significantly over short distances, for example several hundred m, wind turbine location assessment procedures generally take into account all regional parameters that are likely to influence wind conditions.

Such parameters are:

Obstacles in the immediate vicinity
The topography of the surrounding region, characterized by vegetation, land use and buildings (a description of the surface roughness)
Orography, such as hills, which can accelerate or decelerate the airflow
For calculating the annual average power density at a site, an accurate estimate of the average annual wind speed is required, followed by information on the distribution of wind speed over time. Obtaining trustworthy figures requires data sets covering several years, but in practice they are usually estimated from shorter records using appropriate models. The potential energy production can then be determined in relation to the performance of the wind turbine.

The most widespread procedure for long-term prediction of wind speed and energy yield at a site is the WAsP Wind Atlas.

The model quantifies the wind potential at different heights of the rotor shaft above ground for different locations, taking into account the distribution of wind speed (in time and space) at meteorological stations (measurement points).

The reference station can be up to 50 km away from the site. The projected energy output for the location can then be calculated from the power curve of the chosen wind turbine. A key element of the WAsP model is that it uses polar coordinates centred on the location of interest.

The WAsP incorporates both physical atmospheric models and statistical descriptions of the wind climate.

The physical models used include:

Surface-layer similarity, based on the logarithmic wind profile law
The geostrophic wind law
Stability corrections, to allow for departures from neutral stability
Roughness change, to allow for changes in land use across the area
A shelter model, describing the effect of an obstacle on the wind flow
An orographic model, describing the acceleration of the wind over terrain features
The wind regime is described statistically by a Weibull distribution derived from the reference data; the distribution is fitted to best represent the higher wind speeds.
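The Weibull wind-speed model used here has two parameters, a shape k and a scale c, and a closed-form mean speed c·Γ(1 + 1/k). A minimal sketch, with assumed parameter values rather than fitted site data:

```python
import math

# Weibull model of the wind-speed distribution (k: shape, c: scale in m/s).
# The parameter values are illustrative assumptions, not site data.
k, c = 2.0, 7.0

def weibull_pdf(v):
    """Probability density of wind speed v (m/s) under the Weibull model."""
    return (k / c) * (v / c) ** (k - 1) * math.exp(-((v / c) ** k))

# The mean wind speed follows directly from the parameters:
mean_speed = c * math.gamma(1.0 + 1.0 / k)
print(f"Mean wind speed: {mean_speed:.2f} m/s")
```

With k = 2 the Weibull reduces to the Rayleigh distribution, a common default when only the mean wind speed is known.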

Depending on the complexity of the regions examined, different procedures are used to determine the wind conditions. In addition to the WAsP model mentioned above, there are other procedures, such as mesoscale models.

Such measurements, usually performed over a period of one year, may be related to neighboring areas or may be extrapolated to the height of the rotor axis of certain types of turbines using the flow simulation described above.

Evaluating wind resources at a location ideally calls for the longest possible data series at the turbine site. It is also useful to understand the turbulence at the site and at rotor-axis height when designing the turbines; this requires fast time sampling and a spatial distribution of measurement points. In practice, time and cost often rule out such a thorough investigation. Limitations are discussed in the section on meteorology and wind structure.

Wind-velocity measurements are the most critical measurements for wind-resource valuation, performance determination and energy production. In economic terms, uncertainties translate directly into financial risk; there is hardly another field in which measurement uncertainty matters as much as in wind energy. Due to lack of experience, many wind-speed measurements carry unnecessarily high uncertainties, because best practices for anemometer selection, calibration and mounting and for measurement-site selection were not applied.

Investigations have shown that certain anemometers are highly sensitive to vertical air flow, which occurs even in open terrain because of turbulence. In complex terrain these effects are significant and lead to over- or underestimation of the real wind conditions. Only a few types of anemometer avoid these effects.

A representative position for the measuring point within the wind farm must be chosen. For large plants in complex terrain, 2 or 3 representative positions should be chosen for the measurement masts. At least one measurement must be made at the rotor-shaft height of the future turbine, since extrapolating from a lower height to rotor height introduces additional uncertainty. If one of the masts is positioned close to the wind-farm area, it can later serve as the reference wind-speed mast during plant operation and for determining wind-energy performance by sector.

Measurements of wind speed and direction are obviously necessary, but so are other parameters, particularly air pressure and temperature. The equipment used must be robust and reliable, since it will generally be left unattended for long periods.

Average wind speed is usually measured with cup anemometers, because they are reliable and cheap. These anemometers often have better response characteristics than those used at weather observation centres. Wind direction is measured with a wind vane, usually a wound potentiometer. The vane is affected by the shadow of the mast and is therefore often oriented so that the mast lies in the least likely wind direction. If data on the vertical air flow are required, three-dimensional measurements are useful; these are obtained with less robust propeller anemometers or with sonic anemometers, which are more expensive.

These anemometers provide information on both wind speed and direction. The data should be sampled at a high frequency, possibly 20 Hz.

It is important that data transmission and storage are secure. For this purpose, the logger must be carefully protected from atmospheric conditions, especially rain. Many campaigns have suffered significant data loss due to problems such as water infiltration or loss of power. The most promising locations for wind farms are usually in hostile environments, but a range of trusted data loggers is available on the market.

It is possible to collect data remotely and download data via a telephone line. The advantage is that data can be monitored on a regular basis and any other problems that occur with the tools can be resolved quickly. For the development of a wind energy project it is essential to carefully plan the data collection step.

Daily weather information is usually available free of charge from weather services. However, for statistical data and consultancy services fees are charged.

In Southern Europe, the wind regime is dominated by seasonal winds; cold winter weather is associated with northerly and north-easterly winds. These variations can be seen in station records of wind speed and temperature.

Certain studies suggest that a minimum of 8 months of data is required for an adequate estimate of wind resources. Other researchers have suggested that the winter wind is the most important, because it coincides with peak electricity demand. The data can then be sorted into ranges or “bins” for wind speed and/or wind direction. The number of measurements in each bin is counted, and the sorted data are plotted as a percentage of the total number of readings to give the frequency distribution.

From these data it is possible to calculate the average wind speed and the most probable wind speed, and to obtain the distribution of wind power density (proportional to the cube of the wind speed). Data may also be presented as the probability that the wind speed exceeds a given value, usually zero, u > 0. These data can be represented by the two parameters of the Weibull distribution, k and C, obtained with techniques such as the method of moments or the least-squares method. The two-parameter Weibull distribution fits many wind data sets with acceptable accuracy.
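The method-of-moments fit mentioned above can be sketched in a few lines. The sample below is synthetic, and the shape-parameter formula is the widely used Justus empirical approximation k ≈ (σ/v̄)^(−1.086), not a method taken from this text:

```python
import math
import statistics

# Method-of-moments fit of Weibull parameters to wind-speed samples (sketch).
# The speeds below are synthetic, for illustration only (m/s).
speeds = [3.1, 5.4, 6.2, 7.8, 4.9, 8.3, 6.7, 5.1, 9.2, 4.4]

mean = statistics.mean(speeds)
std = statistics.stdev(speeds)

# Empirical approximation for the shape parameter (Justus et al.):
k = (std / mean) ** -1.086
# Scale parameter from the mean: c = mean / Gamma(1 + 1/k)
c = mean / math.gamma(1.0 + 1.0 / k)
print(f"k = {k:.2f}, c = {c:.2f} m/s")
```

A least-squares fit to the cumulative distribution, the other technique named above, would typically give similar parameters on well-behaved data.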

The data collected must be representative; that is, the measurement year should not be unusually windy or calm. To be sure, about 10 years of data are needed, which is obviously impractical for a single site. However, it is possible to compare the wind data from the site with those of a nearby weather station and apply a Measure-Correlate-Predict (MCP) methodology to extend the measured data set to 10 years.

There are a number of available MCP methods, such as:

Calculate the Weibull parameters for the site and for the reference location, correlate them over the measurement period, and then apply the correction to the rest of the reference data
Calculate a correction factor (coefficient) for the wind speed between the site and the reference point, over the measurement period and for each wind-direction sector
Correlate the measured data with the reference data by fitting a continuous function between the two over the measurement period and applying it to the rest of the reference data
Once the probability density of the wind distribution is established, the power curve of a turbine can be combined with the wind data to determine the turbine's energy production. The data can of course be applied to different turbine types and configurations to optimize the results.

The annual energy output of a wind turbine is the most important economic factor. Uncertainties in determining the annual wind speed and the power curve contribute to the overall uncertainty of the predicted annual energy production and lead to high financial risk.

Annual energy production can be estimated by the following two methods:

Wind velocity histogram and power curve
Theoretical wind distribution and power curve
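The second method above can be sketched by numerically integrating a theoretical Weibull wind distribution against an idealized power curve. All parameters (Weibull shape and scale, rated speed and power) are assumptions for illustration, not site data:

```python
import math

# Annual energy production (AEP) from a theoretical wind distribution and a
# power curve. All parameters below are illustrative assumptions.
k, c = 2.0, 7.0                 # Weibull shape and scale (m/s)
HOURS_PER_YEAR = 8760.0

def weibull_pdf(v):
    """Probability density of wind speed v under the Weibull model."""
    return (k / c) * (v / c) ** (k - 1) * math.exp(-((v / c) ** k))

def power_kw(v, cut_in=4.0, rated=13.0, cut_out=25.0, p_rated=2000.0):
    """Idealized power curve: zero, cubic rise, plateau, shut-off."""
    if v < cut_in or v >= cut_out:
        return 0.0
    if v < rated:
        return p_rated * (v ** 3 - cut_in ** 3) / (rated ** 3 - cut_in ** 3)
    return p_rated

# Numerical integration over 0.1 m/s bins from 0 to 30 m/s:
dv = 0.1
aep_kwh = sum(power_kw(i * dv) * weibull_pdf(i * dv) * dv
              for i in range(0, 300)) * HOURS_PER_YEAR
print(f"Estimated AEP: {aep_kwh / 1e6:.2f} GWh")
```

The first method, a measured wind-speed histogram, replaces the Weibull density with observed bin frequencies but uses the same sum.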


Management of Renewable Energies and Environmental Protection, Part III

The purpose of this project is to present an overview of renewable energy sources, major technological developments and case studies, together with practical examples of their use. Renewable energy comes from naturally replenished resources: wind, sunlight, rain, sea waves, tides and geothermal heat. Greenhouse gas emissions are a major driver of climate change, with potentially disastrous effects on humanity.


Introduction
The purpose of this project is to present an overview of renewable energy sources, major technological developments and case studies, accompanied by applicable examples of the use of sources.

Renewable energy is the energy that comes from natural resources: The wind, sunlight, rain, sea waves, tides, geothermal heat, regenerated naturally, automatically.

Greenhouse gas emissions pose a serious threat to climate change, with potentially disastrous effects on humanity. The use of Renewable Energy Sources (RES) together with improved Energy Efficiency (EE) can contribute to reducing energy consumption, reducing greenhouse gas emissions and, as a consequence, preventing dangerous climate change.

At least one-third of global energy must come from different renewable sources by 2050: The wind, solar, geothermal, hydroelectric, tidal, wave, biomass, etc.

Oil and natural gas, classical sources of energy, have fluctuating developments on the international market. A second significant aspect is given by the increasingly limited nature of oil resources. It seems that this energy source will be exhausted in about 50 years from the consumption of oil reserves in exploitation or prospecting.

“Green” energy is at the fingertips of both economic operators and individuals.

In fact, an economic operator can use such a system for both own consumption and energy trading on the domestic energy market. The high cost of deploying these systems is generally depreciated in about 5-10 years, depending on the installed production capacity.

The “sustainability” condition is met when projects based on renewable energy have a negative, or at least neutral, CO2 balance over the life cycle.

Greenhouse Gas (GHG) emissions are one of the environmental criteria included in a sustainability analysis, but they are not sufficient on their own. The concept of sustainability must also cover other aspects in the assessment, such as environmental, cultural and health issues, and must integrate economic aspects as well.

Renewable energy generation in a sustainable way is a challenge that requires compliance with national and international regulations.

Energy independence can be achieved:

Large scale (for communities)
Small-scale (for individual houses, vacation homes or cabins without electrical connection)
Today, renewable energy has gained prominence and developed rapidly, thanks in part to governments and international organizations that have finally begun to understand its imperative necessity for humanity: to avoid crises and wars and to maintain a modern way of life (we can’t go back to caves).

Materials and Methods
The Geothermal Energy Potential
Geothermal energy is defined as the natural heat coming from within the Earth, captured for electricity generation, space heating or industrial steam. It is present anywhere beneath the Earth’s crust, although the highest temperatures, and therefore the most desirable resources, are concentrated in regions with active or geologically young volcanoes.

The geothermal resource is clean and renewable, because the heat emanating from the Earth’s interior is inexhaustible. The geothermal energy source is available 24 h a day, 365 days a year. By comparison, wind and solar energy sources depend on a number of factors, including daily and seasonal fluctuations and climate variations. For this reason, energy produced from geothermal sources is, once captured, more secure than many other forms of electricity. The heat that continually flows from within the Earth is estimated to be equivalent to 42 million megawatts (Stacey and Loper, 1988). One megawatt can supply the energy needs of 1000 homes.

Geothermal energy originates from the thermal waters, which in turn extract their heat from the volcanic magma from the depths of the earth’s crust. The Earth’s thermal energy is therefore very large and is virtually inexhaustible, but it is very dispersed, very rarely concentrated and often too deep to be exploited industrially. Until now, the use of this energy has been limited to areas where geological conditions allow a transport medium (liquid or gaseous water) to “transfer” heat from hotspots from the depth to the surface, thus giving rise to geothermal resources.

The environmental impact of the use of geothermal energy is rather small and controllable. In fact, geothermal energy produces minimal atmospheric emissions. Emissions of nitrogen oxides, hydrogen sulphide, sulfur dioxide, ammonia, methane, dust and carbon dioxide are extremely small, especially when compared to those from fossil fuels.

However, both the water and the condensed steam from geothermal power plants contain various chemical elements, including arsenic, mercury, lead, zinc, boron and sulfur, whose toxicity obviously depends on their concentration. Most of these elements remain in solution, in water that is re-injected into the same reservoir from which the water or steam was extracted. The most important parameter in the use of this type of energy is the temperature of the geothermal fluid, which determines the type of application: it can be used for heating or to generate electricity.

Going from the surface of the Earth into the depth, the temperature increases progressively, by 3°C on average for every 100 m (30°C/km). This is called the geothermal gradient. For example, if the temperature in the first few meters below ground level, which on average corresponds to the mean annual outdoor air temperature, is 15°C, then it can reasonably be assumed that the temperature will be about 65-75°C at 2000 m depth, 90-105°C at 3000 m and so on for the next few thousand meters.
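The gradient arithmetic above can be sketched in a few lines of Python; this is a minimal illustration, and the function name and the 30°C/km default are choices made here, not part of the source:

```python
def temperature_at_depth(surface_temp_c, depth_m, gradient_c_per_km=30.0):
    """Estimate rock temperature at a given depth from the geothermal gradient."""
    return surface_temp_c + gradient_c_per_km * depth_m / 1000.0

# With a 15 degC near-surface temperature and the 30 degC/km average gradient:
print(temperature_at_depth(15, 2000))      # 75.0 degC at 2000 m
print(temperature_at_depth(15, 3000))      # 105.0 degC at 3000 m
print(temperature_at_depth(15, 2000, 25))  # 65.0 degC with a gentler 25 degC/km gradient
```

The 25-30°C/km spread of the average gradient is what produces the 65-75°C and 90-105°C ranges quoted above.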

Regions of interest for geothermal energy applications are those where the geothermal gradient is higher than normal. In some areas, either due to volcanic activity of a recent geological age or due to the ascent of hot water through deep fractures, the geothermal gradient is significantly higher than the average, so that temperatures of 250-350°C are recorded at depths of 2000-4000 m.

A geothermal system consists of several main elements: a heat source, a reservoir, a carrier fluid that provides heat transport, a recharge area and a cap rock that seals the aquifer. The heat source may be a very hot magmatic intrusion (> 600°C) that has reached relatively shallow depths (5-10 km) or, in some low-temperature systems, the normal temperature of the Earth, which, as explained earlier, increases with depth.

The reservoir is a volume of permeable rocks from which the carrier fluid (water or steam) extracts heat. It is generally covered either by impermeable layers or by rocks whose low permeability is due to self-sealing, the deposition of minerals in the pores of the rock. The reservoir is connected to a surface recharge area through which meteoric waters can replace the fluids that leave the reservoir through springs or through extraction boreholes. The geothermal fluid is water, in most cases meteoric, in the liquid or gaseous phase depending on temperature and pressure; it often carries chemicals and gases such as CO2, H2S, etc.

The mechanism underlying geothermal systems is generally governed by fluid convection, which occurs due to the heating and thermal expansion of fluids in a gravitational field. The heated, low-density fluid tends to rise and to be replaced by cooler, high-density fluid coming from the margins of the system. Convection, by its nature, tends to increase the temperature at the top of the system, while the temperature at the bottom decreases.

Frequently, a distinction is made between water-dominated and vapor-dominated geothermal systems. In water-dominated systems, liquid water is the continuous fluid phase controlling the pressure; vapor may be present, generally as discrete bubbles. These systems, whose temperatures may reach about 225°C, are the most widespread in the world. Depending on temperature and pressure conditions, they can produce hot water, water-steam mixtures, wet steam or, in some cases, dry steam. In vapor-dominated systems, liquid and vapor co-exist in the reservoir, with continuous steam controlling the pressure. Geothermal systems of this type, of which the best known are Larderello in Italy and The Geysers in California, are quite rare and are high-temperature systems.

Generating electricity is the most important use of high-temperature geothermal resources (> 150°C). Medium- and low-temperature resources are suitable for various other applications. The classic Lindal diagram (1973) shows the possible uses of geothermal fluids at different temperatures. Fluids at temperatures below 20°C are rarely used, and only under very particular conditions or in heat pump applications (DiPippo, 2004).

In the case of temperatures below 90°C, geothermal waters can be used directly rather than for conversion to electricity.

The most common form of use is for space heating, agricultural applications, aquaculture and some industrial uses.

When the water temperature is below 40°C, heat pumps can be used to heat or cool spaces. If groundwater is not available, heat pumps can be combined with ground heat exchangers.

A heat pump is a thermal machine that extracts heat from the subsurface and from shallow aquifers (tens or hundreds of meters deep) at low temperature and transfers it at a higher temperature to the medium to be heated. The advantage of heat pumps is that for each unit of electricity consumed, approximately three units of heat are obtained, with the contribution of the geothermal water.

In the case of cooling, heat is extracted from the space and dissipated into the Earth; in the case of heating, heat is extracted from the Earth and pumped into the space.

A heat pump is governed by the same limitations of the second law of thermodynamics as any other thermal machine (any energy transformation implies the dissipation of a part of the heat, which can no longer be used), and its maximum efficiency can be calculated from the Carnot cycle. Heat pumps are normally characterized by a coefficient of performance, the ratio of the heating power delivered to the electrical power absorbed from the grid.
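As a sketch of the two figures just described, the Carnot ceiling and the practical coefficient of performance can be computed as follows; the temperatures are hypothetical, and only the roughly three-to-one ratio of heat to electricity comes from the text:

```python
def carnot_cop_heating(t_hot_k, t_cold_k):
    """Theoretical maximum (Carnot) coefficient of performance for heating.
    Temperatures must be in kelvin."""
    return t_hot_k / (t_hot_k - t_cold_k)

def cop(heat_delivered_kw, electric_input_kw):
    """Practical coefficient of performance: heat delivered per unit of electricity."""
    return heat_delivered_kw / electric_input_kw

# Hypothetical temperatures: aquifer at 10 degC (283.15 K), heating water at 45 degC (318.15 K).
print(round(carnot_cop_heating(318.15, 283.15), 1))  # 9.1, the theoretical ceiling
# The "three units of heat per unit of electricity" figure corresponds to:
print(cop(9.0, 3.0))  # 3.0
```

Real heat pumps sit well below the Carnot ceiling because of compressor losses and finite temperature differences across the heat exchangers.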

High-enthalpy geothermal energy is most commonly used to generate electricity. A typical geothermal system used to produce electricity must deliver about 10 kg of steam to produce one unit (kWh) of electricity. Producing large amounts of electricity, in the order of hundreds of megawatts, requires large volumes of fluid. Thus, one aspect of geothermal systems is that they must contain large amounts of high-temperature fluid, or a reservoir that can be recharged with fluids that are heated on contact with the rocks.
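The 10 kg-per-kWh figure lets one estimate the steam flow a plant of a given size needs; a short sketch, where the 100 MW plant size is an assumed example:

```python
STEAM_KG_PER_KWH = 10.0  # the figure quoted in the text above

def steam_flow_kg_per_h(plant_mw):
    """Steam mass flow needed to sustain a given electrical output."""
    kwh_per_hour = plant_mw * 1000.0  # 1 MW sustained for one hour = 1000 kWh
    return kwh_per_hour * STEAM_KG_PER_KWH

# A hypothetical 100 MW plant:
flow = steam_flow_kg_per_h(100)
print(flow)                # 1000000.0 kg of steam per hour
print(round(flow / 3600))  # ~278 kg of steam per second
```

A sustained flow of hundreds of kilograms of steam per second is why the text stresses either a very large reservoir or one that recharges continuously.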

The three basic types of power-generating installations are “dry steam”, “flash” and “binary” plants. The production of electricity in each type of installation depends on the temperatures and pressures of the reservoir and each type produces different environmental impacts.

The most common type of power plant today is the water-cooled “flash” plant, in which a mixture of water and steam is produced by the well. The steam is separated in a surface vessel and led to the turbine, and the turbine drives a generator.

In a dry-steam installation, the steam comes directly from the geothermal reservoir to the turbine that drives the generator; no separation is required because the well produces only steam.

Recent advances in geothermal technologies have made it possible to produce electricity economically even from low-temperature geothermal resources of 100-150°C. Known as “binary” geothermal plants, these installations reduce the emissions due to geothermal energy almost to zero. In the binary process, the geothermal water heats another liquid, such as isobutane or n-pentane, which boils at a lower temperature than water and has a high vapor pressure at low temperatures compared to steam. The two liquids are kept completely separate, a heat exchanger being used to transfer the thermal energy from the geothermal water to the working fluid. The working fluid vaporizes, and the force of the expanding vapor drives the turbines that drive the generators.

If the geothermal plant uses air cooling, the geothermal fluids never make contact with the atmosphere before being pumped back into the underground geothermal reservoir. Developed in the 1980s, this technology is already in use in geothermal power stations around the world. The ability to use low-temperature resources increases the number of geothermal reservoirs that can be exploited for energy production.

Binary geothermal power stations, along with flash systems, produce almost zero emissions. In the case of direct use of thermal energy from geothermal hot water, the impact on the environment is negligible and can be easily reduced by adopting closed cycle systems, with the final extraction and re-injection of the fluid in the same geothermal reservoir.

The economics of the use of hot water still limit its wider spread in the energy sector. In fact, the economic benefit derives from prolonged use over many years with low operating costs, although the initial investment may be considerable.

Identifying a geothermal reservoir is a complex activity consisting of different phases, starting with the exploration of the surface of a given area. This consists of a preliminary assessment of current geothermal manifestations (hot water springs, steam jets, geysers, etc.), followed by geological, geochemical and geophysical surveys and the drilling of exploration wells (several hundred meters deep) to measure the temperature (geothermal gradient) and evaluate the Earth’s heat flux. The interpretation of the collected data will suggest the location where deep exploration is to be carried out by well drilling (even at depths above 4000 m) to confirm the existence of geothermal fluids. In the case of positive results, the geothermal field that has been identified will be exploited by drilling a sufficient number of wells to produce the geothermal fluid (hot water or steam).

The largest geothermal power plant in the world is “The Geysers” in California, with an installed power of 750 MW.

Geological and hydrogeological studies have an important role in all phases of geothermal research. They provide basic information for interpreting the data obtained with other exploration methods, and ultimately for the realization of a realistic model of the geothermal system and the assessment of the potential of the resource.

Geothermal areas should be further analyzed using geophysical techniques (gravimetry, magnetic and electrical tests, chemical analysis of the hot water, etc.) to locate specific reservoirs, the source of the geothermal fluid. Geophysical analysis aims at indirectly determining the physical parameters of deep geological formations from the surface or from depths close to the surface.

These physical parameters include:

Temperature (temperature measurement)
Electrical conductivity (electrical and electromagnetic methods)
The propagation velocity of elastic waves (seismic studies)
Density (gravity analysis)
Magnetic susceptibility (magnetic methods)
Geothermal exploration is done through a sequence of several steps:

Study of thermal conditions by collecting heat-flow measurements and map information
Study of hydrogeological maps to assess the distribution of groundwater resources
Drilling wells for fluid extraction
Only after surface exploration has shown that there is a workable potential can one proceed to drill wells.


NASA Started a Propeller set on Board Voyager 1 After 37 Years of Break


Keywords: Voyager 1, Propellers, NASA, Space Agency.

Introduction

If you try to start a car that has been sitting in the garage for decades, you expect the engine not to respond. But a set of propellers on board the NASA Voyager 1 spacecraft was fired without any problems on Wednesday, November 29, 37 years after its last use.

Voyager 1 is the only man-made object that has reached interstellar space; it is also the NASA space probe that travels at the highest speed and is the farthest from Terra. The probe has been flying for 40 years and can change its attitude to keep its antenna pointed at Terra using small propellers that fire in very short pulses, in the order of milliseconds.

NASA’s Voyager team has been able to fire a set of back-up propellers that had not been used since 1980. The successful test extends Voyager 1’s life by at least 2-3 years.

In 2014, NASA engineers noticed that the propellers Voyager 1 uses to change orientation had degraded. Over time, the propellers had to operate longer than normal to obtain the same effect on the orientation of the probe. NASA experts designed several working scenarios to solve the problem and concluded that it was best to use a series of back-up propellers to control the probe’s orientation.

These propellers had not been used for 37 years. NASA had to search through decades-old archives and use an obsolete programming language to compile the commands, which were then transmitted by radio waves to the small computer on board Voyager 1.

The probe is more than 20 billion km from Terra. In the early years of the mission, Voyager 1 flew past Jupiter, Saturn and some of the satellites of these planets.

In order to maintain the correct distance and orientation of the on-board instruments, engineers used a series of dedicated Trajectory Correction Maneuver (TCM) propellers, identical in size and functionality to those used for small flight corrections. These trajectory-correction propellers are placed on the back of the probe.

After the encounter with Saturn, Voyager 1 no longer needed them, their last use being on November 8, 1980. Back then these propellers had been used in a different way: they operated for long periods, not in very short pulses.

All engines on board Voyager were produced by Aerojet Rocketdyne; the same type of engine is installed on other spacecraft such as Cassini and Dawn. On November 28, Voyager engineers started the four TCM engines and tested their ability to steer the probe using 10-millisecond pulses. Researchers then had to wait for the test results to travel through space, in the form of radio waves, to be received after 19 h and 35 min by an antenna at Goldstone, California, part of NASA’s Deep Space Network.

On November 29, the engineers learned that the engines worked perfectly. The plan is that, as of January, Voyager 1 will use only these four propellers to point the antenna at Terra. These engines need heat to operate, and that heat is obtained using the energy supplied by the probe’s radioisotope thermoelectric generators, fueled by a plutonium isotope. As the energy reserve is limited, engineers plan to use these propellers for a limited period of time and will switch back to the original attitude control propellers when the energy reserves become too low.
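The 19 h 35 min wait is simply the one-way light travel time over the Earth-probe distance; a quick sanity check, where the exact distance used is an assumption based on the "more than 20 billion km" figure above:

```python
SPEED_OF_LIGHT_KM_S = 299_792.458

def one_way_light_time_h(distance_km):
    """One-way radio signal travel time, in hours, over a given distance."""
    return distance_km / SPEED_OF_LIGHT_KM_S / 3600.0

# Assumed Earth-probe distance of about 21.1 billion km:
hours = one_way_light_time_h(21.1e9)
print(round(hours, 1))  # 19.6 h, consistent with the reported 19 h 35 min
```

Since the result had to make the same trip back, nearly 39 hours separated the command from the confirmation.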

Voyager 1 is a space probe launched by NASA on September 5, 1977, still in operation. It visited the planets Jupiter and Saturn, being the first probe to transmit images of the satellites of these planets. Its twin, Voyager 2, launched on August 20, 1977 (before Voyager 1), is the only probe that has visited all four large planets of the Solar System: Jupiter, Saturn, Uranus and Neptune, thanks to the alignment of these planets. After completing their initial plan to study the planets, the two probes continued their journey into space. Voyager 1 left the heliosphere in August 2012, entering interstellar space; Voyager 2 will follow in a few years (Petrescu et al., 2017a; 2017b; 2017c; 2017d; 2017e; 2017f; 2017g; 2017h; 2017i; 2017j; 2017k; 2017l; 2017m; 2017n; 2017o; 2017p; 2017q; Petrescu, 2016; Aversa et al., 2017a; 2017b; 2017c; 2017d; 2017e; 2016a; 2016b; 2016c; 2016d; Mirsayar et al., 2017; Petrescu and Petrescu, 2016a; 2016b; 2016c; 2013a; 2013b; 2013c; 2013d; 2012a; 2012b; 2012c; 2012d; 2011a; 2011b; Petrescu, 2012a; 2012b; 2009; Petrescu and Calautit, 2016a; 2016b; Petrescu et al., 2016a; 2016b).

Materials and Methods

The National Aeronautics and Space Administration (NASA) is the most renowned and important independent agency of the federal government of the United States, responsible for civilian space programs as well as all aeronautical and aerospace research programs initiated by the United States of America.

President Dwight D. Eisenhower, who set up NASA in 1958, conceived it with a distinctly civilian (rather than military) orientation, so that it could carry out independent missions to conquer cosmic space, which is obviously a great passion and basic mission of humanity: to learn as much as possible about the space we live in and to try to conquer it in the next millennium. We cannot stay locked here on our planet without asking who we are, where we come from, where we are going, what we represent in the giant universe in which we live, how we can know and explore it, what its limits are and how we can learn about other surrounding universes. Too many questions for a small man, but very few for a humanity so big and especially so important!

The National Aeronautics and Space Act was adopted on 29 July 1958, replacing NASA’s predecessor, the National Advisory Committee for Aeronautics (NACA). The new agency became operational on 1 October 1958.

The agency immediately received its mandate and started to work; most US space exploration efforts have since been led by NASA, including the Apollo Moon landing missions, the Skylab space station and, later, the Space Shuttle. Over the years NASA has worked with perseverance, sending reconnaissance missions to the Moon, Mars and then all the planets of the Solar System.

At present, NASA is the main pillar supporting the International Space Station and oversees the development of the Orion Multi-Purpose Crew Vehicle, the Space Launch System and Commercial Crew vehicles.

The agency is also responsible for the Launch Services Program (LSP), an important program that oversees launch operations and the management of special NASA launch campaigns.

NASA is also the world’s leading space operator, focusing its exploration programs not only on flights but also on the creation and use of ever more powerful telescopes capable of scanning the universe around us. It is important to identify planets that could provide living conditions for mankind, with the obvious long-term goal of reaching, terraforming and colonizing them. The expansion of humanity into outer space has become imperative because our planet’s population is constantly multiplying while the planet’s resources diminish.

If modern telescopes have the role of exploring cosmic space and finding planets that can provide life, there is a need for huge, fast ships that can travel to them in real time.

NASA is also focusing on a better understanding of the Earth through the Earth Observing System, advancing heliophysics through the research efforts of the Heliophysics missions, exploring bodies throughout the Solar System with robotic spacecraft missions such as New Horizons and exploring astrophysical topics such as the Big Bang through the Great Observatories and associated programs (NASA, From Wikipedia).

Results

Voyager 1 is one of the two NASA spacecraft launched in 1977 to study the outer planets of the Solar System, which had previously been observed only by telescopes on Earth. The mission was made possible by an exceptional alignment of the outer planets that only happens every 176 years. The main objective of Voyager 1 was to collect data on the systems of Jupiter and Saturn, with particular emphasis on the latter’s main moon, Titan.

Voyager 1, with its twin probe, is at the origin of a large number of discoveries about the Solar System, sometimes calling into question or refining existing theoretical models.

As such, it is one of the most successful space missions of the US space agency. Among the most remarkable results are the complex functioning of Jupiter’s Great Red Spot, the first observation of Jupiter’s rings, the discovery of Io’s volcanism, the strange structure of the surface of Europa, the composition of the atmosphere of Titan, the unexpected structure of the rings of Saturn, as well as the discovery of several small moons of Jupiter and Saturn.

The spacecraft has proved very long-lived and, as of 2015, still has operational instruments collecting scientific data on its environment. In August 2012 it left the heliosphere, the region of space under the magnetic influence of the Sun, and is now progressing through the interstellar medium. Starting in 2020, however, its instruments will have to be shut down one by one to cope with the weakening of its source of electrical energy, supplied by three radioisotope thermoelectric generators. Voyager 1 will no longer be able to transmit data beyond about 2025.
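The weakening of the power source can be sketched from the half-life of plutonium-238, the heat source of such generators; the 87.7-year half-life is an assumption added here, not stated in the text:

```python
PU238_HALF_LIFE_YEARS = 87.7  # half-life of plutonium-238 (assumed; not in the source text)

def rtg_heat_fraction(years_elapsed):
    """Fraction of the original radioisotope heat output left after decay.

    Models radioactive decay only; thermocouple degradation lowers the
    electrical output further, so real figures are somewhat smaller."""
    return 0.5 ** (years_elapsed / PU238_HALF_LIFE_YEARS)

# Roughly 40 years after the 1977 launch:
print(round(rtg_heat_fraction(40), 2))  # 0.73 of the original heat output
```

Even with nearly three-quarters of the heat remaining, the shrinking electrical margin is what forces instruments to be switched off one by one.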

Voyager 1 is an 825.5 kg space probe (propellant included) built around a large 3.66 m diameter parabolic antenna whose size is intended to compensate for the remoteness of the Earth. It carries ten scientific instruments with a total mass of 104.8 kg, part of which is located on a steerable platform. As of October 11, 2017, the spacecraft was approximately 20,944,040,000 km (140 AU) from the Sun and approximately 21,008,710,000 km (140.43 AU) from the Earth.
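The kilometre figures can be checked against the astronomical-unit values with a one-line conversion, using the IAU value of the astronomical unit:

```python
KM_PER_AU = 149_597_870.7  # kilometres per astronomical unit (IAU definition)

def km_to_au(distance_km):
    """Convert a distance in kilometres to astronomical units."""
    return distance_km / KM_PER_AU

print(round(km_to_au(20_944_040_000), 2))  # 140.0  (distance from the Sun)
print(round(km_to_au(21_008_710_000), 2))  # 140.43 (distance from the Earth)
```

Both conversions reproduce the AU figures quoted in the text.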

Voyager 1 is, along with Voyager 2, one of the two probes of the Voyager program. This space program was set up by the American space agency (NASA) to explore the outer planets (Jupiter, Saturn and beyond), which had not been studied until then because of the technical complexity of such a project. The agency wished to take advantage of an exceptional conjunction of the outer planets, repeated only every 176 years, which would allow the probes to fly over several planets while spending practically no fuel, by using the gravitational assistance of the objects previously visited.

After abandoning a more ambitious project for budget reasons, NASA managed to build two machines perfectly suited to this complex program, as the longevity of the probes and the quality of the scientific data they collected would prove. The project was officially launched on 1 July 1972 and the manufacture of the space probes began in March 1975, with the completion of the design phase. The Pioneer 10 (launched in 1972) and Pioneer 11 (1973) probes, which served as scouts along the route, provided vital information on the shape and intensity of the radiation around the planet Jupiter, which was taken into account in the design of the Voyagers.

The objective of the Voyager program is to collect scientific data on the outer planets (Jupiter, Saturn, Uranus and Neptune), which at the time were virtually unexplored: only Pioneer 10 and 11, light probes developed to serve as scouts for the Voyager probes but carrying few instruments, had so far approached Jupiter and Saturn. The main objective assigned to both probes was to collect data to better understand the two giant planets, their magnetospheres and their natural satellites. The latter, some of which are the size of a planet, were very poorly known. The study of the moon Titan, already known at the time to have an evolved atmosphere, was considered as important as the exploration of Saturn, its mother planet. Finally, the collection of data on the two other giant planets of the Solar System, Uranus and Neptune, about which very little was known because of their remoteness, constituted a major objective once the study of Jupiter and Saturn was completed.

Voyager 1, which precedes its twin probe, had as its initial objective the exploration of Jupiter and Saturn. It had to complete its exploration mission by flying close to Titan, the main moon of Saturn. To achieve this, however, it had to perform a maneuver that took it out of the plane of the ecliptic, excluding any possibility of exploring another outer planet. The flyby and study of Uranus and Neptune were thus entrusted to Voyager 2. To pass from Jupiter to Saturn, the probe used the gravitational assistance of the first planet, which gave it a significant acceleration while placing it on course for the second.

Given their good operational status at the end of their primary mission in 1989, new targets were set for the space probes after the flybys of the outer planets. The aim of the Voyager Interstellar Mission (VIM) is to study the poorly known regions located at the limits of the Sun’s zone of influence: the termination shock and the heliopause must be crossed before the probes, once through the heliosheath, emerge into the interstellar medium, whose characteristics no longer depend on our star.

Voyager 1 is an 825.5 kg probe (propellant included) whose central part consists of a flattened aluminum cylinder with ten lateral facets, a diameter of 188 cm and a height of 47 cm. This structure contains most of the electronics, protected by a shield, and a tank in which the hydrazine used for propulsion is stored. A fixed parabolic high-gain antenna 3.66 m in diameter is attached to the top of the cylinder. Its large size allows an exceptional X-band rate of 7.2 kilobits per second at Jupiter’s orbit and partially offsets the weakening of the signal at Saturn’s orbit. Voyager 1 has sixteen small redundant propellers burning hydrazine, used both for trajectory changes and for attitude control.
