Monthly Archives: March 2014
In theory, absolute zero is the temperature at which the particles of matter stop moving. Absolute zero is impossible to achieve, because all particles keep moving, even if only as a tiny vibration. Researchers have created temperatures very close to absolute zero; the record is about 100 pK (picokelvins) above it. Even getting close to absolute zero is difficult, because anything that touches an object being cooled near absolute zero transfers heat to it. Scientists use lasers to slow atoms down when cooling objects to very low temperatures.
The Kelvin and Rankine temperature scales are defined so that absolute zero is 0 kelvins (K) or 0 degrees Rankine (°R). The Celsius and Fahrenheit scales are defined so that absolute zero is −273.15 °C or −459.67 °F.
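The scale definitions above can be expressed as simple conversion functions. This is a minimal sketch; the function names are ours, but the conversion formulas are the standard ones:

```python
# Convert a temperature in kelvins to the other scales mentioned above.

def kelvin_to_celsius(k):
    return k - 273.15

def kelvin_to_fahrenheit(k):
    return k * 9.0 / 5.0 - 459.67

def kelvin_to_rankine(k):
    return k * 9.0 / 5.0

# Absolute zero, the ice point, and the boiling point of water:
for k in (0.0, 273.15, 373.15):
    print(f"{k:7.2f} K = {kelvin_to_celsius(k):8.2f} °C "
          f"= {kelvin_to_fahrenheit(k):8.2f} °F "
          f"= {kelvin_to_rankine(k):7.2f} °R")
```

Running this confirms that 0 K is −273.15 °C, −459.67 °F, and 0 °R, as stated above.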
At absolute zero, the pressure of an ideal gas would also be zero: if we plot pressure against temperature and extrapolate the line downwards, it reaches zero pressure at this temperature. The temperature cannot go down any further. The particles cannot move in "reverse" either, because the movement of particles is vibration, and vibrating in reverse would be nothing but simply vibrating again. As the temperature of many materials approaches absolute zero, their electrical resistance drops; below a critical temperature some become superconductors, conducting electricity with no measurable resistance.
The Third Law of Thermodynamics implies that nothing can ever be cooled all the way down to absolute zero.
The Second Law of Thermodynamics says that all engines powered by heat (like car engines and steam train engines) must release waste heat and cannot be 100% efficient. This is because the efficiency (the percentage of the heat energy taken in that is actually used to do the engine's job) is 100% × (1 − T_outside/T_inside), which is only 100% if the outside temperature is absolute zero, which it cannot be. So an engine cannot be 100% efficient, but you can bring its efficiency closer to 100% by making the inside temperature hotter and/or the outside temperature colder.
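The efficiency formula above is easy to check numerically. The temperatures below are made-up example values (in kelvins), not data for any real engine:

```python
# Maximum (Carnot) efficiency of a heat engine, as in the formula above:
# efficiency = 100% × (1 - T_outside / T_inside), temperatures in kelvins.

def carnot_efficiency_percent(t_inside_k, t_outside_k):
    return 100.0 * (1.0 - t_outside_k / t_inside_k)

# A hypothetical engine running at 600 K with 300 K surroundings:
print(carnot_efficiency_percent(600.0, 300.0))   # 50.0

# Making the inside hotter pushes the efficiency towards (never to) 100%:
print(carnot_efficiency_percent(1200.0, 300.0))  # 75.0
```

Only an outside temperature of absolute zero would make the ratio zero and the efficiency exactly 100%, which is why no real engine can get there.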
In September 2013, MIT scientists cooled a sodium gas to the lowest temperature ever recorded — only half-a-billionth of a degree above absolute zero.
Absolute zero is defined to be −273.15 °C, or 0 K.
via Blogger http://ift.tt/1g7hJKp
Holmium is a chemical element with the symbol Ho and atomic number 67, and is a rare earth element. It was discovered by the Swedish chemist Per Theodor Cleve. Its oxide was first isolated from rare earth ores in 1878, and the element was named after the city of Stockholm.
It is a relatively soft and malleable silvery-white metal. It is too reactive to be found uncombined in nature, but when isolated, is relatively stable in dry air at room temperature. However, it reacts with water and rusts readily, and will also burn in air when heated.
Holmium has the highest magnetic moment of any naturally occurring element, and is therefore used for the pole pieces of the strongest static magnets.
Holmium oxide appears to have different colours depending on changes in ambient lighting. Under natural light, it’s yellow, but under fluorescent lighting, it’s pink.
Ho2O3 (left: natural light; right: fluorescent lamp light)
Holmium is used in yttrium-iron-garnet (YIG) and yttrium-lithium-fluoride (YLF) solid-state lasers, which are found in microwave equipment and in a variety of medical and dental settings. Holmium lasers emit at 2.08 micrometres, a wavelength that is safe to the eye, and are used in medical, dental, and fibre-optic applications.
Holmium is one of the colorants used for cubic zirconia and glass, providing yellow or red colouring. Glass containing holmium oxide and holmium oxide solutions (usually in perchloric acid) have sharp optical absorption peaks in the spectral range 200–900 nm. They are therefore used as a calibration standard for optical spectrophotometers and are available commercially.
via Blogger http://ift.tt/1iK9qct
Measurement today is more valuable than ever. We depend on measurement for almost everything – from time keeping to weather forecasts, from DIY work at home to heavy-duty manufacturing, industrial research and medical science.
Since measurement plays such a fundamental part in our lives, it is important that the accuracy of the measurement is fit for purpose, i.e. it fully meets the requirements of the application. Every measurement is inexact and therefore requires a statement of uncertainty to quantify that inexactness. The uncertainty of a measurement is the doubt that exists about the result of any measurement.
One way of ensuring that your measurements are accurate is by tracing them back to national standards. This method of guaranteeing a measurement’s accuracy through an unbroken chain of reference is called traceability.
Accurate measurement enables us to:
- Maintain quality control during production processes
- Comply with and enforce laws and regulations
- Undertake research and development
- Calibrate instruments and achieve traceability to a national measurement standard
- Develop, maintain and compare national and international measurement standards
Successful measurement depends on the following:
- Accurate instruments
- Traceability to national standards
- An understanding of uncertainty
- Application of good measurement practice
There are many factors that can cause inaccuracy:
- Environmental effects
- Inferior measuring equipment
- Poor measuring techniques
In the United Kingdom, the National Measurement System (NMS) is in place to enable measurements to be traced back to their national standards. As the UK's national standards laboratory, NPL is at the pinnacle of this system, guaranteeing the accuracy of physical measurements for the nation and abroad.
What is Uncertainty?
No measurement is ever guaranteed to be perfect. Uncertainty of measurement is the doubt that exists about the result of any measurement. By quantifying the possible spread of measurements, we can say how confident we are about the result.
A measurement result is only complete when accompanied by a statement of its uncertainty. A statement of uncertainty is required in order to decide if the result is adequate for its intended purpose and consistent with other similar results.
It does not matter how accurate a measuring instrument is considered to be, the measurements made will always be subject to a certain amount of uncertainty.
In order to express the uncertainty of a measurement, we need to evaluate as accurately as possible the errors associated with that particular measurement.
For example, we might say that a particular stick is 200 centimetres long, plus or minus 1 centimetre, at a 95% confidence level. This is written: 200 cm ± 1 cm, at a level of confidence of 95%.
This means we are 95% sure that the length of the stick is between 199 centimetres and 201 centimetres.
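One common way such a statement is produced is to multiply a standard uncertainty by a coverage factor. The numbers below (a standard uncertainty of 0.5 cm and coverage factor k = 2, roughly 95% confidence for a normal distribution) are assumptions chosen to reproduce the stick example, not values from any real measurement:

```python
# A sketch of reporting a measurement with expanded uncertainty.
# Standard uncertainty and coverage factor are illustrative assumptions.

mean_length_cm = 200.0
standard_uncertainty_cm = 0.5
coverage_factor = 2  # k = 2 corresponds to ~95% confidence

expanded_uncertainty_cm = coverage_factor * standard_uncertainty_cm
lower = mean_length_cm - expanded_uncertainty_cm
upper = mean_length_cm + expanded_uncertainty_cm

print(f"Length = {mean_length_cm} cm ± {expanded_uncertainty_cm} cm "
      f"(k = {coverage_factor}, ~95% confidence)")
print(f"We are ~95% sure the length lies between {lower} and {upper} cm")
```

This reproduces the interval from 199 cm to 201 cm given above.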
Why does Uncertainty Matter?
Calculating and expressing uncertainty is important to anybody wishing to make good quality measurements.
It is also crucial where uncertainty can influence a pass or failure in a particular test, and must therefore be reported on a calibration certificate.
There are established rules for the evaluation of uncertainty. [More information can be found in NPL's Good Practice Guide (011) 'A Beginner's Guide to Uncertainty of Measurement'.] Of course, we must all make every effort to 'control' the uncertainty in our measurements. This is done by regular inspection and calibration of our instruments, careful calculation, and good record-keeping.
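One of those established rules is that independent uncertainty components are combined in quadrature (root-sum-of-squares) to give a combined standard uncertainty. The component values below are hypothetical:

```python
import math

# Combining independent uncertainty components in quadrature,
# as in standard uncertainty-evaluation practice.
# The component values are made up for illustration.
components_cm = [0.3, 0.4]  # e.g. instrument resolution, operator variation

combined_cm = math.sqrt(sum(u ** 2 for u in components_cm))
print(combined_cm)  # ≈ 0.5
```

Note that the combined uncertainty (≈ 0.5 cm) is smaller than the plain sum of the components (0.7 cm), because independent errors partly cancel.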
What is Traceability?
Traceability is a method of ensuring that a measurement (even with its uncertainties) is an accurate representation of what it is trying to measure.
What is Traceability to National Standards?
The simple and basic concept behind calibration is that measuring equipment should be tested against a standard of higher accuracy.
It should be possible to demonstrate an unbroken chain of comparisons that ends at a national standards body such as NPL. This demonstrable linkage to national standards with known accuracy is called ‘traceability’.
National standards laboratories such as NPL also routinely undertake international comparisons in order to establish worldwide consensus on the accepted value of fundamental measurement units.
Representatives of seventeen nations signed the Convention of the Metre (Convention du Mètre) on 20th May 1875 in Paris. This diplomatic treaty provided the foundations for the establishment of the Système International d’Unités (International System of Units, international abbreviation SI) in 1960. Since then, national standards laboratories have cooperated in the development of measurement standards that are traceable to the SI.
Any organisation can achieve traceability to national standards through the correct use of an appropriate traceable standard from NPL.
Who is Who in the Measurement World?
International Committee for Weights and Measures (CIPM – Comité International des Poids et Mesures): the world's highest authority in the field of measurement science.
International Bureau of Weights and Measures (BIPM – Bureau International des Poids et Mesures): the co-ordinating body for international metrology, based in Sèvres, France.
National Physical Laboratory (NPL): the UK's national standards laboratory, a world-leading centre in the development and application of highly accurate measurement technology and material science.
What is the Difference Between ACCURACY and PRECISION?
The difference between accuracy and precision is illustrated below by 4 different archers, each with a varying degree of ability. The bull's-eye in the target represents the true value of a measurement.
Low accuracy and low precision (poor repeatability)
Stone age man missed the bull’s-eye and the 3 attempts were not near each other.
Low accuracy but high precision
Robin Hood’s Merry Man missed the bull’s-eye but the 3 attempts were near each other.
Higher accuracy but low precision
Native American’s 3 attempts were near the bull’s-eye, but were not near each other.
High accuracy and high precision
Olympic archer hit the bull's-eye 3 times!
Accuracy is a qualitative term relating the mean of the measurements to the true value, while precision is representative of the spread of these measurements. Even when we are precise and accurate, there will still be some uncertainty in our measurements; the scientist's challenge is to evaluate that uncertainty and make it as small as possible. When the uncertainty of a measurement is evaluated and stated, the fitness for purpose for a particular application can be properly understood.
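The archer illustration can be put in numbers: the mean of the shots measures accuracy (closeness to the true value, here 0) and the spread measures precision. The shot distances below are invented to match the four cases described above:

```python
from statistics import mean, pstdev

# Hypothetical shots: distance (cm) of each arrow from the bull's-eye
# centre along one axis; 0 would be a perfect hit.
archers = {
    "low accuracy, low precision":   [-30.0, 5.0, 40.0],
    "low accuracy, high precision":  [20.0, 21.0, 22.0],
    "high accuracy, low precision":  [-10.0, 0.0, 10.0],
    "high accuracy, high precision": [-1.0, 0.0, 1.0],
}

for name, shots in archers.items():
    # Accuracy ~ how close the mean is to the true value (0);
    # precision ~ how small the spread (standard deviation) is.
    print(f"{name}: mean offset = {mean(shots):6.1f} cm, "
          f"spread = {pstdev(shots):5.1f} cm")
```

The second archer's shots cluster tightly (small spread) around the wrong place (large mean offset), which is exactly "low accuracy but high precision".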
via Blogger http://ift.tt/1o0zd1J
On March 7th 2009, the Kepler space observatory, designed to discover Earth-like habitable planets orbiting other stars, was launched. The spacecraft is named after the Renaissance astronomer Johannes Kepler, who is best known for his laws of planetary motion.
Kepler is designed to survey a portion of our region of the Milky Way to discover dozens of Earth-size extrasolar planets in or near the habitable zone and estimate how many of the billions of stars in our galaxy have such planets.
Kepler uses a photometer that continually monitors the brightness of over 145,000 main sequence stars in a fixed field of view. This data is transmitted to Earth and analysed to detect periodic dimming caused by extrasolar planets that cross in front of their host star.
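The transit method described above can be sketched with a toy light curve: brightness stays flat, then dips slightly each time a planet crosses the star. All the numbers here (period, transit width, 1% dimming) are made up for illustration; real pipelines, Kepler's included, are far more sophisticated:

```python
# Toy transit photometry: build a synthetic light curve with periodic
# dimming, then flag the dimmed samples with a simple threshold.

PERIOD = 100        # samples between transit starts
TRANSIT_WIDTH = 5   # samples per transit
DEPTH = 0.01        # fractional dimming during transit (1%)

# Brightness is 1.0 out of transit, dips while the planet crosses.
light_curve = [
    1.0 - DEPTH if (i % PERIOD) < TRANSIT_WIDTH else 1.0
    for i in range(1000)
]

# Flag samples that fall below the out-of-transit brightness level.
dimmed = [i for i, flux in enumerate(light_curve) if flux < 0.995]

print(f"{len(dimmed)} dimmed samples found")  # 50
print(f"first transit spans samples {dimmed[0]} to {dimmed[4]}")  # 0 to 4
```

The key signature Kepler's analysts look for is that the dips repeat with a fixed period, which distinguishes a planet from one-off brightness changes.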
As of July 2013, Kepler had found 134 confirmed exoplanets in 76 stellar systems, along with a further 3,277 unconfirmed planet candidates. In November 2013, astronomers reported, based on Kepler space mission data, that there could be as many as 40 billion Earth-sized planets orbiting in the habitable zones of Sun-like stars and red dwarf stars within the Milky Way Galaxy. About 11 billion of these estimated planets may be orbiting Sun-like stars. The nearest such planet may be 12 light-years away, according to the scientists.
via Blogger http://ift.tt/P8vEux