
The Joy of High Tech

by

Rodford Edmiston

Being the occasionally interesting ramblings of a major-league technophile.



Please note that while I am an engineer (BSCE) and do my research, I am not a professional in this field. Do not take anything here as gospel; check the facts I give. And if you find a mistake, please let me know about it.

Measurement

     One of the greatest accomplishments of human technology is our accuracy in measurement. This is actually not a new development. For all of recorded history humans have been pushing the limits of the available tools and techniques in order to achieve better measurements. The foundation for the Great Pyramid was leveled and the cornerstones laid with astounding accuracy and precision.

     That last sentence, by the way, brings up an important point. Accuracy is not the same thing as precision. Put one way, accuracy is how close to right a value is, while precision is how finely that value is stated; that is, how many decimal places it has. Put another way, if the local time is 12:02 PM, and someone tells you "It's about Noon," that statement has good accuracy but poor precision. If they tell you "It's 7:53:02" then they are providing great precision but poor accuracy. Consistency also matters; that is, how close to each other are the answers from a series of measurements of the same thing. (Strictly speaking, metrologists count this repeatability as part of precision rather than accuracy.) A measuring tool which gives widely different answers to the same question about the same object under the same conditions is generally useless.
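
     To make the distinction concrete, here is a quick simulation (my own sketch, not from the original column; the numbers are invented): one measuring tool that is centered on the truth but noisy, and one that is wonderfully repeatable but biased.

```python
import random

TRUE_VALUE = 100.0  # the quantity being measured, in millimeters

def summarize(label, readings):
    """Mean offset from truth shows accuracy; scatter of repeats shows consistency."""
    mean = sum(readings) / len(readings)
    scatter = (sum((r - mean) ** 2 for r in readings) / len(readings)) ** 0.5
    print(f"{label}: bias {mean - TRUE_VALUE:+.3f} mm, scatter {scatter:.3f} mm")

random.seed(42)
# Tool A: centered on the right answer but noisy -> accurate, not very consistent.
tool_a = [random.gauss(TRUE_VALUE, 0.5) for _ in range(1000)]
# Tool B: very repeatable but reads 2 mm high -> consistent, not accurate.
tool_b = [random.gauss(TRUE_VALUE + 2.0, 0.01) for _ in range(1000)]

summarize("Tool A", tool_a)
summarize("Tool B", tool_b)
```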

     Accuracy and precision both should be evaluated as fractions of the whole, such as parts in a thousand. Which means the scale of the measurement is important. Over a distance of several kilometers, being within a few centimeters (that is, to within a few parts in a hundred thousand) is pretty good accuracy. Being half a millimeter off in placing a transistor on an integrated circuit chip isn't. For lengths of multiple kilometers, rounding the measurement to a whole meter is probably precise enough. On the other hand, if you are measuring the line spacing of a diffraction grating, even if the measurements are in centimeters you'd better have a lot of decimal places.
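
     The arithmetic is simple enough to show in a few lines (a sketch of my own; the three-centimeter and half-millimeter figures are just illustrative choices):

```python
def one_part_in(error, total):
    """Express an error as 'one part in N' of the whole measurement."""
    return total / error

# A few centimeters over a few kilometers: quite good.
print(f"3 cm over 3 km: one part in {one_part_in(0.03, 3000):,.0f}")
# Half a millimeter placing a transistor on a 10 mm chip: useless.
print(f"0.5 mm over 10 mm: one part in {one_part_in(0.0005, 0.010):,.0f}")
```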

     Achieving high accuracy and precision in measuring long distances is surprisingly easy. With a basic knowledge of geometry, a few simple tools and a reasonable amount of repetition, surveyors can be accurate to within one part in 20,000 or so. Professional surveyors with theodolites and calibrated measuring tapes can place benchmarks to an accuracy of about one part in 60,000. This sort of accuracy astounds people not familiar with surveying. They often protest that (for example) the ancient Egyptians couldn't have surveyed their monuments so accurately with the tools they had. (Heinlein plays on this in Farnham's Freehold, when the people of a technologically advanced future society express skepticism about the title character's information on how accurately USGS benchmarks are placed.)

     The secret - which anyone who has taken a surveying course knows - is repetition. You survey a point between known benchmarks, or come back to your starting point, and calculate your accumulated error. If the error is too great, you re-survey, going the other direction. Using these methods, even college students in a hurry to complete a surveying camp project with battered and worn old tools over rough, wooded terrain can easily exceed an accuracy of one part in 10,000.
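
     In code, the closure check looks something like this (a simplified sketch with invented field data; real surveying practice adjusts angles and distances separately):

```python
import math

# Invented legs of a closed traverse: (bearing in degrees, distance in meters).
# A perfect survey would return exactly to the starting point.
legs = [(0.0, 400.0), (90.01, 300.0), (180.0, 400.02), (270.0, 299.98)]

east = north = perimeter = 0.0
for bearing, dist in legs:
    rad = math.radians(bearing)
    east += dist * math.sin(rad)
    north += dist * math.cos(rad)
    perimeter += dist

closure = math.hypot(east, north)  # how far we missed the starting point
print(f"closure error: {closure * 100:.1f} cm over {perimeter:.0f} m")
print(f"precision: one part in {perimeter / closure:,.0f}")
```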

     Above I mentioned calibration. To calibrate, you must have a standard to calibrate against. Any competent engineer will tell you: if a standard exists, even an arbitrary one, use it, unless you have a very good reason not to. Historically, standards of length were often based on the length of some body part of a monarch. (Now stop that...) The English foot, for instance, is said to be based on the foot of a monarch who happened to have very large pedal extremities. Weight standards were often such things as a large number of a specific type of small seed. (Using a large number of items means there is a good chance the unusually large and small ones average out, making the result more accurate and reproducible.) Volume measures were generally based on one or both of the above.
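
     The statistics behind the seed trick are easy to demonstrate (a toy simulation of my own; the 65 milligram seed weight and ten percent variation are made-up values):

```python
import random

random.seed(7)

def per_seed_weight(n):
    """Average weight per seed in a batch of n seeds (~65 mg, 10% variation)."""
    return sum(random.gauss(65.0, 6.5) for _ in range(n)) / n

# The spread of the per-seed average shrinks as the batch grows.
for n in (1, 100, 10_000):
    batches = [per_seed_weight(n) for _ in range(200)]
    print(f"{n:>6} seeds per batch: "
          f"per-seed weight ranges {min(batches):.2f} to {max(batches):.2f} mg")
```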

     Ancient Egyptian monuments often feature a portrayal of linear measurement standards. A life-size relief of a pharaoh or god is shown with arms outstretched, and lines crossing the body at various points to indicate the measurement standards. (Often the crossing lines are omitted, since the meaning was presumably understood.) The cubit, the hand, the finger and other Egyptian measures were surprisingly constant down through the ages, at least within the same region, though more than one of each type was often in use at the same time. (The royal and common cubits, for instance.)

     Volume standards are a little trickier, but have been found. Weight standards (artificial ones, as opposed to the natural ones mentioned above) are perhaps the most durable, and the easiest to produce with reasonable accuracy and precision. Scale weights are found in many archeological digs, and those from a particular region and era are surprisingly consistent.

     Simple scales can be quite accurate and precise. I reload my own ammunition, a practice which requires high accuracy and precision over a wide range of weights. A few years ago I decided to have my powder scale calibrated at the local office of weights and measures. This scale was inexpensive, around $40, and made mostly of plastic and stamped sheet steel. Yet it was so accurate and precise that the metrologist who performed the calibration was impressed. When you consider that a difference of a tenth of a grain of powder (a grain being 1/7000th of a pound, or 0.0648 gram) can cause a change in bullet impact of over a centimeter at 100 meters, you can see that these scales must perform their function well.
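
     The arithmetic behind those grain figures is simple unit conversion (a quick sketch; the pound-to-gram factor is the standard definition):

```python
GRAINS_PER_POUND = 7000
GRAMS_PER_POUND = 453.59237  # exact definition of the avoirdupois pound

grain_g = GRAMS_PER_POUND / GRAINS_PER_POUND
print(f"1 grain   = {grain_g:.4f} g")                # ~0.0648 g, as above
print(f"0.1 grain = {0.1 * grain_g * 1000:.2f} mg")  # the load difference in question
```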

     Standards for other types of measurement also exist in history, but are generally more subjective. A touchstone is a rock on which a piece of gold or silver is rubbed. The color of the mark left gives an accurate estimate of the actual precious metal content of the object, but a trained eye is needed to judge this. Today, of course, there are several accurate, repeatable methods of analyzing alloys.

     Measurement techniques pretty much stayed the same for thousands of years. Oh, there were some attempts to refine measuring methods, or to create standards based on nature. Mechanical clocks, for instance, simply divide time into smaller increments than the sundial or hourglass, and parcel them out more uniformly. Twelve-hour days date back to ancient Sumeria. The development of clocks with long-term accuracy and precision is a subject for a whole 'nother column.

     With the invention of the thermometer came the first tool specifically intended to measure temperature, and the first new concept in millennia of what could be measured. (Galileo devised a simple thermoscope around 1593, but the world had to wait until 1714 for Daniel Fahrenheit to make the first real, practical thermometer.) Once the idea of measuring temperature escaped into the scientific community, several people began developing standards of measurement at the same time. This is why we have the Centigrade/Celsius and Fahrenheit scales. Only later, when molecular theory was developed, did the concept of an absolute standard - absolute zero, the lowest possible temperature - emerge. Today most physical scientists use the Kelvin scale for temperature measurement. This starts at absolute zero and goes up, using degrees the same size as those in the Centigrade system. The standard freezing point of water - zero on the Centigrade scale - is 273 and a bit in kelvins, at least at sea level air pressure.
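
     The relationships among the three scales reduce to one-line formulas (a quick sketch; these conversions are standard, nothing here is from the original column):

```python
def c_to_f(c):
    """Celsius to Fahrenheit: different degree size, different zero point."""
    return c * 9 / 5 + 32

def c_to_k(c):
    """Celsius to Kelvin: same degree size, shifted so zero is absolute zero."""
    return c + 273.15

for c in (-273.15, 0.0, 100.0):  # absolute zero, water freezes, water boils
    print(f"{c:8.2f} C = {c_to_f(c):8.2f} F = {c_to_k(c):7.2f} K")
```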

     The French Revolution also brought a revolution in the science of measurement. The new French government wanted to make as clean a break from the past as possible. Of course, the findings of the scientists who accompanied Napoleon to Egypt also stimulated science as a whole. New ways of doing things - scientific ways - were developed, and old ways reinvestigated and revised. Wherever possible, the resulting systems of measurement were based on natural phenomena. They did a pretty good job, even if they did get the size of the Earth wrong. About the only system the French developed which is not still widely used is their calendar. And why the United States and Britain don't use the metric system is beyond me...

     With modern equipment, measurements can be made quickly, precisely and accurately, and usually conveniently. Electronics aren't necessarily required, either. Gauge blocks - developed over a century ago - are still used today, with the Johansson Gauge perhaps being the best known of these. Johansson didn't invent the idea of a standard gauge, but he created a standardized, consistent system for making and using them. The basic form and application of gauge blocks have not changed notably in many decades, though tungsten and ceramic have partly taken over from tool steel. (Note that many automobile manufacturers - including Volvo, Cadillac and Ford - started using Johansson gauges early in their histories to make standardization of parts easier. Ford actually purchased the American branch of Johansson's company in 1923. With an accurate, precise and consistent means of measuring, making each bore hole or wheel rim the same size was much easier. This is where Eli Whitney failed in his attempts at doing the same thing with firearms decades earlier. He couldn't measure accurately and consistently enough.)

     To measure with gauge blocks you simply select a combination of blocks which adds up to the length you want (using the fewest blocks you can, in a process called stacking), "wring" the blocks by rubbing the appropriate ends against each other to squeeze the air out from between them, and compare the length to the target. One interesting feature of gauge blocks is that the surfaces are so smooth and flat that once they are mated in this way they stay together unless considerable effort is used to part them. The combination of the adhesive action of the ultra-thin film of preservative oil or moisture on the blocks and the molecular attraction, or bonding, between the very flat and parallel mating surfaces will actually hold them together.
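
     Picking a stack is a small search problem. Here is a rough sketch of it in Python (the block sizes are patterned on a standard 81-piece inch set; a machinist would actually work the decimal places by hand rather than brute force):

```python
from itertools import combinations

# Block sizes in inches, patterned on a standard 81-piece set.
BLOCKS = (
    [round(0.1000 + i / 10000, 4) for i in range(1, 10)]   # 0.1001 .. 0.1009
    + [round(0.100 + i / 1000, 3) for i in range(1, 50)]   # 0.101 .. 0.149
    + [round(0.050 * i, 3) for i in range(1, 20)]          # 0.050 .. 0.950
    + [1.000, 2.000, 3.000, 4.000]
)

def stack_for(target, max_blocks=5):
    """Brute-force search for the fewest distinct blocks that sum to target."""
    for n in range(1, max_blocks + 1):
        for combo in combinations(BLOCKS, n):
            if abs(sum(combo) - target) < 1e-9:
                return combo
    return None

# One minimal stack for 1.4875": 0.1005 + 0.137 + 0.250 + 1.000.
print(stack_for(1.4875))
```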

     There are several classifications of gauge blocks, depending on how precise and accurate they need to be. A laboratory or master set is typically accurate to within .000002 inch (one part in half a million on a one-inch block). These are normally used only in temperature-controlled labs, as references to compare or check the accuracy of other gauges. Next come the inspection sets, accurate to within .000004 to .000002 inch, which are used to inspect the accuracy of working sets. These last - the working gauges - are accurate to .000006 to .000002 inch. They are used in shops for machine tool setups, layout work and measurement, and to calibrate adjustable gauges, such as micrometers and verniers.

     Most gauge blocks are made of tool steel, chromium carbide (Croblox) or tungsten carbide. These days ceramic gauge blocks are becoming popular. Both temperature stability and wear resistance are very important. (My thanks to Tom Lipton for his corrections on the materials used for gauge blocks.)

     Note my comment above about temperature-controlled labs. Physical characteristics are interrelated, and dimensions change with temperature. Normally, temperature variations don't significantly alter measurements, but when measuring to within one part in several million in a true metrology laboratory, the temperature of the room, and all its contents, is set at 20 degrees C and held there to within 0.25 degrees C. Ted Doiron, a physicist in the precision engineering division of the National Institute of Standards and Technology in Gaithersburg, MD, often refers to something he calls Doiron's Law of Dimensional Metrology: "The guy with the best thermostat wins!" It's simple, but very true: in millionths measurements, temperature is everything.

     Gauge blocks are still the industry-standard length masters. They are used daily in a broad spectrum of applications, from measuring parts to relatively loose tolerances on the factory floor to measuring to a few parts in a million in an environmentally controlled metrology laboratory. Once you're familiar with stacking and wringing, there are three gauge-block preparation steps you should take each time you are going to make a measurement.

     First, use clean and demagnetized gauge blocks. A gauge block that has grease and grime on it will be inaccurate, and even traces of dirt can cause excessive wear. Most cleaning jobs can be accomplished by wiping each block with a soft, lint-free cloth moistened with mineral spirits. You must also demagnetize all blocks that retain a magnetic field. Good electronic demagnetizers and gauss gauges are common catalog items today.

     Second, eliminate nicks and burrs. Gauge blocks require good overall geometry to measure accurately. Nicked, burred or scratched measuring faces will not wring together well and will most likely provide anomalous readings. Deburring the measuring faces of steel gauge blocks by lightly "swiping" them along the face of a clean, flat, serrated Arkansas or granite stone is ideal, using clean mineral spirits as a carrier. This procedure, performed correctly, will not hinder the quality or integrity of the gauge blocks. Note that different block materials will require other types of deburring stones.

     Third, maintain temperature. Varying environmental temperatures affect material size, especially when two different materials are involved. For example, if the gauge blocks are steel and the part is aluminum, thermal expansion and contraction cause the two materials to change size by different amounts at different rates; the sketch below puts numbers on this. If the temperature is fluctuating as well, the problem is compounded. Use of gauge blocks in the temperature-controlled atmosphere of the metrology laboratory yields the most reliable measurements. However, proper handling will allow accurate use on the plant floor as well.
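
     The steel-against-aluminum problem is easy to quantify (a rough sketch; the handbook expansion coefficients, 100 millimeter length and 5 degree swing are my own illustrative choices):

```python
# Linear thermal expansion: dL = alpha * L * dT
ALPHA = {"steel": 11.5e-6, "aluminum": 23.0e-6}  # per degree C, handbook values

L_MM = 100.0  # length of the part and the gauge stack, in millimeters
DT = 5.0      # swing away from the 20 degree C reference

growth_um = {m: a * L_MM * DT * 1000 for m, a in ALPHA.items()}  # micrometers
for metal, g in growth_um.items():
    print(f"{metal:8s} grows {g:.1f} micrometers over {DT:.0f} C")
print(f"differential error: {growth_um['aluminum'] - growth_um['steel']:.1f} micrometers")
```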

     Measurements, as mentioned above, are interrelated. Mass depends on volume and density. Measuring volume depends on measuring length. Measuring temperature depends on measuring changes in length or volume or conductivity. And so on. At some point, there have to be standards for one or more types of measurement so comparisons can be made. Throughout most of history, measurement standards were arbitrary and often unrepeatable. In some cases a master unit was created, such as the platinum alloy bar which for over a century was the meter, or the platinum alloy cylinder which is still the kilogram.

     However, in our modern era some standard is desired besides an arbitrary lump of metal. Today, the length standard is a certain number of wavelengths of a specific frequency of light, and the time standard a certain number of oscillations of the radiation from a specific atomic transition. (Remember my comment above about how using a large number of small things in a standard makes the result more accurate and reproducible?) Using these standards has revolutionized physics, both large scale and small. You may not think that a researcher being able to verify results to one part in a few trillion could possibly have any impact on everyday life, but it does, if indirectly. Building a gigahertz processor for a computer requires astoundingly high levels of accuracy and precision in both the design and the manufacturing of the chip. Yet we not only achieve this goal, we do it repeatedly, for millions upon millions of identical chips.
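
     For a feel for the numbers involved (a back-of-the-envelope sketch; the wavelength count is the 1960-1983 krypton-86 definition of the meter, and the cycle count is the current cesium-133 definition of the second):

```python
# 1960-1983 definition of the meter: exactly this many wavelengths of one
# orange spectral line of krypton-86.
KR86_WAVELENGTHS_PER_METER = 1_650_763.73

# Definition of the SI second: exactly this many oscillations of the
# radiation from the cesium-133 hyperfine transition.
CS133_CYCLES_PER_SECOND = 9_192_631_770

print(f"Kr-86 line wavelength: {1e9 / KR86_WAVELENGTHS_PER_METER:.2f} nm")
print(f"Cs-133 cycles in a day: {CS133_CYCLES_PER_SECOND * 86_400:.3e}")
```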

     Today we have measuring instruments capable of fantastic feats. A project I read about recently involves launching a satellite to orbit the Earth and measure frame dragging; that is, the infinitesimal effect the rotation of our planet's mass produces on the structure of space-time around it. The mission is Gravity Probe B, and it contains four quartz spheres spun at high speed in a liquid-helium dewar to act as gyroscopes. The precision of angular measurement for this system is 0.1 milliarcsecond. Optical sensors will match the gyroscopes against a distant guide star as a reference. The satellite is expected to measure frame dragging to within less than one percent of the actual value. (The shipment of the Gravity Probe B space vehicle to Vandenberg Air Force Base is currently targeted for July 10, 2003. Launch is presently scheduled for no earlier than November 6, 2003, on a Delta II rocket. Mission life is expected to be approximately 16 months. So keep an eye out for mention of the project.)
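
     To get a feel for 0.1 milliarcsecond, a little small-angle arithmetic helps (my own comparison, not from the mission literature):

```python
import math

MAS = math.radians(1 / 3600 / 1000)  # one milliarcsecond, in radians
angle = 0.1 * MAS                    # GP-B's quoted angular precision

hair_m = 1e-4                        # a human hair is roughly 0.1 mm across
print(f"0.1 mas = {angle:.2e} radians")
print(f"about a hair's width seen from {hair_m / angle / 1000:.0f} km away")
```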

     For decades most scientists thought this subtle effect would have to be measured indirectly, such as by making observations of matter falling into a black hole. LAGEOS I and LAGEOS II have made indirect measurements around the Earth which appear to provide a value for the planet's frame dragging about 10% greater than what general relativity predicts. This is almost as astounding a technological feat as GP-B.

     One of the most valuable skills an engineer can have is knowing when to stop. With measurement, that means knowing how many decimal places is enough. For large structures - an office building or a bridge - half a centimeter may be close enough. For drinking water, a few parts per million. For nuclear power plants, a few parts per billion. Not enough decimal places, and the building may fall apart, someone get sick, or the reactor melt down. Too many and the bid will be too high. Either way, the engineer is out of work.

     However, once the number of decimal places required is determined, measurements must still be made to at least that standard. (Remember: measure twice, cut once.) So let's keep in mind the engineers of measurement, the metrologists, and how important they are not just to us, but to civilization throughout history.

     This document is Copyright 2002 Rodford Edmiston Smith. Anyone desiring to reprint this material must have the permission of the author, who can be reached at: stickmaker@usa.net