A friend of mine asked me to explain science using a "top down" approach. Here's what I came up with:
Science is fundamentally a systematic way of determining "the rules." We posit the existence of underlying physical law (how the universe works), and science seeks to approximate that law with models. Models include theories, laws, hypotheses and facts. Commonly, theory and law refer to models that are accepted and largely validated (even if more accurate models exist). Facts are assertions that are modelled (believed) to be true. As you can tell, there's no clear line differentiating theories, facts and laws in common usage. It doesn't really matter, though, because they're all models.
Science seeks to determine the models that best fit our observations of the world. The "truest" models, if you will. It does so through a process of refinement called the scientific method. A model, called a hypothesis, is tested through observation -- checking its predictions against observations of the physical world. Generally, experiments are constructed to see how accurate a model is. If a model's accuracy cannot be determined through observation, the model is not "falsifiable" and is thus not scientific. That's not to say it's not in some sense true, but it means the model's accuracy can't be measured. (Note that traditionally people talk of disproving hypotheses, but scientific models are not right or wrong, they are more or less accurate.)
The classic non-scientific statement is "God exists." Taking God to mean "an omniscient, omnipotent being," there's no way to measure God's existence. By being omnipotent and omniscient, God can exist but prevent any observation of evidence of God's existence. Since it's not falsifiable, the existence of God is not a scientific question.*
*I don't take this to mean that God doesn't exist, just that such existence is outside the purview of science. Some people do believe that non-falsifiable statements cannot be true.
Often, just observing the world as it happens isn't enough to determine how accurate a model is. There are often too many complicating factors, like weather, that constantly change. So scientists set up experiments that attempt to control as many factors as they can. Then they vary the factors to see if they have any impact on the result.
It turns out that our best scientific models all have drawbacks. Quantum Mechanics doesn't do gravity right and is too computationally expensive to be applied on large scales. General Relativity works well on large scales but not small ones and it doesn't do gravity entirely right, either. Newton's laws work really well on human scales, but not so well on really small or large or fast moving scales. So we have a wide variety of models to describe a wide variety of things.
That's the root of science. 1) The universe has rules. 2) Models approximate those rules. 3) Models are tested and their accuracy measured through observation, particularly experiments. 4) Models are refined or new models created to be more accurate approximations of the underlying universal rules.
Math (and I'm including logic) is a formal way to describe relationships. As such, scientific models inevitably end up with mathematical descriptions.
Engineering shows up too. It's the application of the scientific method and scientific models to create new objects -- be they physical items like airplanes, information like software or social structures like governments. Without the scientific side, it's not engineering, it's artisanship. Another way to look at it is that engineering is the application of scientific method and knowledge to art. So, naturally, engineers are both scientists and artists.*
*Yes, this is an idealization. But, really, without the scientist component an engineer is an artisan or an artist. And an engineer that doesn't create is more a scientist or analyst. There is no shame in any of these occupations, though.
Sunday, May 29, 2011
Wait, Doctors don't use checklists?
Atul Gawande gave a commencement speech at Harvard Medical School discussing how to improve the health care system. A central theme is that medicine has grown to be very complex and that doctors don't handle the complexity well. I'm rather surprised that doctors don't normally use checklists for complicated activities.
It strikes me as obvious that they should. Dr. Gawande has apparently written a book to this effect, too.
There are also reddit discussions for those interested.
Edit: Dr. Gawande has another article about the use of checklists in medicine and the thousands of lives they've saved in very limited application. It sounds like there's not a strong engineering mindset in the medical field.
Friday, May 27, 2011
Supernovae and Civilizations
Phil Plait recently mentioned that a supernova needs to be less than 100 lightyears away to be harmful to our planet/civilization.
This was in context of a discussion about supernovae in the Trumpler 15 cluster, a "collection of thousands of stars packed into a volume of space only a few light years across." It seems to me that such clusters would be hostile to civilizations like ours developing.
If a large proportion of systems which could otherwise give birth to a civilization are in clusters with a high rate of supernovae, we should lower one of the "fraction of life/civilization" factors in the Drake equation.
Additionally, if the worlds with many nearby interstellar neighbors are the same worlds most likely to be affected by supernovae, then the fraction of civilizations that become starfaring should also be reduced: the civilizations that are safest from supernovae have the largest distances to overcome to become starfaring.
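For concreteness, here's a tiny Python sketch of the Drake equation. The factor values below are purely illustrative, not estimates; the point is just that N scales linearly with each factor, so discounting cluster-bound systems lowers the count proportionally.

```python
# A toy version of the Drake equation. Every value below is illustrative,
# not an estimate; the point is that N scales linearly with each factor.

def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    """Expected number of communicating civilizations in the galaxy."""
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

baseline = drake(r_star=7, f_p=0.5, n_e=2, f_l=0.33, f_i=0.01, f_c=0.01, lifetime=10_000)

# Hypothetically, if 20% of otherwise suitable systems sit in supernova-prone
# clusters, scale the life-bearing fraction down by that much.
adjusted = drake(r_star=7, f_p=0.5, n_e=2, f_l=0.33 * 0.8, f_i=0.01, f_c=0.01, lifetime=10_000)

print(f"baseline N ~ {baseline:.1f}, adjusted N ~ {adjusted:.1f}")
```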
Monday, May 23, 2011
Drop it like it's hot (from the Moon)
Yet another discussion on space solar power on reddit pointed out the difficulties (both real and imagined) of transferring energy harvested in space back to Earth. Since this particular discussion centered around transferring energy from the Moon, I wondered if there might be some other way to transmit the energy from a body such as the moon.
What if we dropped some sort of energy storage device from the Moon onto Earth? Since Luna* is high up in Terra's gravity well and has a low escape velocity, doing so shouldn't be too difficult. So the question becomes, "What sort of energy storage device?"
*Which names are preferable: Luna or Moon, Terra or Earth?
Three possibilities came to mind: superconducting electromagnetic storage, flywheel storage and thermal storage. All three benefit from something space has in abundance -- vacuum. Vacuum combined with a low background temperature makes achieving the low temperatures necessary for superconducting storage a matter of proper shading, at least during the trans-Terra flight. Flywheels don't lose energy to drag, and thermal storage benefits from the ease with which a vacuum thermos can be created.
Presumably the cost of transporting material to the moon would be fairly high, so I decided to limit my consideration to devices that could be constructed on the moon. I started with the flywheel. Aluminum is very common on the moon, so I worked out the maximum flywheel storage for a very strong aluminum-lithium alloy, Weldalite 048-T8. I came up with an energy density of 273.1 kJ/kg (and this is just the mass of the flywheel, not any of the parts needed to get it safely to Earth). That's about 1/100th the energy density of coal, though energy extraction from a flywheel is more efficient than from burning coal, so call it about 1/60th the usable energy density of coal.
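If you want to check that number, here's a rough Python sketch of the limiting flywheel specific energy. The strength and density are my assumed values for the alloy (chosen so they reproduce the figure above), and the shape factor of 1 is the theoretical best case, so treat this as an upper bound rather than a design number.

```python
# Limiting flywheel specific energy: e = K * sigma / rho.
# sigma and rho are assumed alloy properties; K = 1 is the ideal shape factor.

sigma = 740e6    # assumed ultimate tensile strength, Pa
rho = 2710.0     # assumed alloy density, kg/m^3
K = 1.0          # ideal shape factor; a thin rim would only manage 0.5

specific_energy = K * sigma / rho            # J/kg
print(f"max specific energy ~ {specific_energy / 1e3:.1f} kJ/kg")

coal = 27e6      # rough heat content of coal, J/kg
print(f"about 1/{coal / specific_energy:.0f} the raw energy density of coal")
```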
Waving aside all of the considerable other technical challenges, is the low energy density a show stopper for aluminum flywheel based interplanetary energy transfer?
Wikipedia gives world energy use as about 474 exajoules per year. That would require 1735 trillion kg of aluminum per year. While that doesn't push things into the impossible realm (Luna has a lot of aluminum), I'd say it's definitely infeasible. Particularly when you add the energy cost of extracting the aluminum from regolith, at least 50 MJ/kg.
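Here's that back-of-envelope arithmetic in Python, using only the figures above. The comparison of extraction energy to delivered energy is what really sinks the idea.

```python
# Scale check: aluminum flywheel mass needed per year to deliver world energy
# use, and the energy it takes just to win that aluminum from regolith.

world_energy = 474e18        # J per year, world energy use
specific_energy = 273.1e3    # J/kg, flywheel figure from above
extraction_energy = 50e6     # J/kg, lower bound for extracting aluminum from regolith

mass_per_year = world_energy / specific_energy
print(f"aluminum required ~ {mass_per_year:.2e} kg/yr")   # ~1.7e15 kg, i.e. ~1700 trillion kg

print(f"extraction energy ~ {mass_per_year * extraction_energy / 1e18:.0f} EJ/yr "
      f"to deliver {world_energy / 1e18:.0f} EJ/yr")
```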
Well, maybe one of the other options is more workable. I hope to detail my (really simple) analysis of this, um, out-of-the-box idea and look at thermal storage in an upcoming post.
Saturday, May 7, 2011
Welcoming Our Microkernel Inspired Robot Overlords
Operating system kernels perform roles quite analogous to governments. They control the distribution of resources, set and enforce rules and control rogue processes. As such, I think the microkernel/monokernel debate has an analogy in governmental theory.
Would government be improved if the central government only managed smaller, more specialized governments? States' rights groups and other supporters of federalism would likely think so. Traditional American federalism isn't the only governmental architecture that would map onto microkernels and their helper daemons, though. A set of smaller, specialized legislative bodies with authority defined by area of expertise rather than geographical location would fit well as daemons, too. The microkernel could then be the voting populace.
Many of the arguments for microkernels map pretty well onto government as well. Limited legislatures (or other rule-making bodies -- the system need not have a representative element) can be more rapidly shut down in case of failure or bad behavior. They can also be hot-swapped without bringing down the rest of the system, making it more robust and secure. Independently managed, empowered and funded specialist governments could potentially avoid the pain and inefficiency of a full government shutdown.
It seems the US government should be considered a hybrid system, with the federal government (particularly the legislative and upper executive branches) tending towards monolithic-ness and the state governments and executive departments bringing elements of microkernel-ness.
The analogy isn't perfect, of course. I'm not sure how the very useful idea of checks and balances would play into the normally hierarchical OS models. Perhaps a distributed systems analogy could be useful. Still, it seems like OS architectural considerations like the micro/monokernel discussion ought to be a fruitful framework for improving government design.
"Welcoming Our Microkernel Inspired Robot Overlords"
Operating system kernels perform roles quite analogous to governments. They control the distribution of resources, set and enforce rules and control rogue processes. As such, I think the microkernel/monokernel debate has an analogy in governmental theory.
Would government be improved if the central government only managed smaller, more specialized governments? States' rights groups and other supporters of federalism would likely think so. Traditional American federalism isn't the only governmental architecture that would map onto microkernels and their helper daemons, though. A set of smaller, specialized legislative bodies with authority defined by area of expertise rather than geographical location would fit well as daemons, too. The microkernel could then be the voting populace.
Many of the arguments for microkernels map pretty well onto government as well. Limited legislatures (or other rule making bodies -- the system need not have a representative element) can be more rapidly shutdown in case of failure or bad behavior. They can also be hotswapped without bringing down the rest of the system, making it more robust and secure. Independently managed, empowered and funded specialist governments could potentially avoid the pain and inefficiency of a full government shutdown.
It seems the US government should be considered a hybrid system. The federal government and particularly the legislative and upper executive branches tending towards monolithic-ness and the state governments and executive departments bringing elements of micorkernel-ness.
The analogy isn't perfect, of course. I'm not sure how the very useful idea of checks and balances would play into the normally hiearchial OS models. Perhaps a distributed systems analogy could be useful. Still, it seems like OS architectural considerations like micro/monokernel discussions ought to be a fruitful framework for improving government design.
Thursday, April 7, 2011
More on the Falcon Heavy
When I did my sanity check, I made some optimistic assumptions about the potential Falcon Heavy performance. For one, I used the vacuum exhaust velocity (specific impulse) for the 1st stage prior to side booster separation, which will overestimate the delta-v for that stage. We should be able to keep using the vacuum exhaust velocity for the core stage post separation and for the upper stage, since they'll be operating in near vacuum.
To get a worst case estimate of the needed second stage mass fraction, we can use the deliberately pessimistic sea level exhaust velocity for the entire pre-separation first stage.
This reduces the delta-v for the first phase to about 2964 m/s. Add in a less charitable (pessimistic, even) estimate of gravitational and aerodynamic losses, say -1400 m/s, and the velocity at upper stage separation is only 6450 m/s. That leaves 1338.6 m/s for the second stage to make up. Remember, we can use this to solve for the post-burnout mass of the second stage. We get 48244 kg -- less than the payload mass.
A more realistic lower bound for performance comes from using the average of the sea level and vacuum exhaust velocities for the pre-separation 1st stage in the rocket equation. That average is 2840.5 m/s, which yields a booster burnout velocity of 3471.7 m/s. Plugging this back into our previous calculations (including losses and the boost due to the rotation of the Earth), we end up with a velocity at upper stage ignition of 6959.1 m/s. The upper stage now only has to add 830.9 m/s to reach the orbital velocity of 7790 m/s.
Remember, we find the burnout mass from the initial mass, exhaust velocity and delta-v. It's 56127 kg, leaving 3127 kg of structure after the 53000 kg payload is accounted for. That gives a structural fraction of about 1/23 (0.0435) for the upper stage, which is similar to the Falcon 9 core stage structural fraction. Upper stage structural fractions tend to be higher (worse) than core or lower stages because vacuum optimized bell nozzles are longer and thus heavier than nozzles optimized for liftoff. The payload interface may create additional structural mass as well. Still, if SpaceX can achieve a structural fraction of 1/30 for the side boosters, they may well be able to get 1/23 for the upper stage.
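For anyone who wants to rerun the two bounding cases, here's a rough Python sketch. The inputs (upper stage exhaust velocity, ignition mass, payload and orbital velocity) are the figures quoted here and in the original sanity check post, not independently sourced numbers.

```python
import math

# Reruns the two bounding cases using the figures quoted in these posts.

v_e_upper = 3355.0      # m/s, upper stage vacuum exhaust velocity (342 s)
m_ignition = 71_900.0   # kg, stack mass when the upper stage lights
payload = 53_000.0      # kg
v_orbit = 7790.0        # m/s, target circular orbital velocity

def burnout_mass(v_at_separation):
    """Mass left after the upper stage supplies the remaining delta-v."""
    dv = v_orbit - v_at_separation
    return m_ignition * math.exp(-dv / v_e_upper)

# Worst case: sea level exhaust velocity throughout and pessimistic losses.
m_worst = burnout_mass(6450.0)
print(f"worst case burnout mass ~ {m_worst:,.0f} kg (less than the payload)")

# Averaged exhaust velocity case.
m_avg = burnout_mass(6959.1)
structure = m_avg - payload
print(f"averaged case: structure ~ {structure:,.0f} kg, "
      f"structural fraction ~ 1/{m_ignition / structure:.0f}")
```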
We can tweak the variables in a few other ways to see how tight the constraints on the Falcon Heavy really are. What if the boosters aren't quite as lightly built? What if the core stage is less massive than we've estimated? Launch vehicles are very sensitive to structural mass.
Still, from the information available, Falcon Heavy looks likely to be a workable vehicle. The unknown we've estimated, upper stage mass fraction, is likely to be closer to 1/23 than 1/11.
Tuesday, April 5, 2011
SpaceX Sanity Check
Elon Musk recently made some amazing claims about the capabilities of SpaceX's Falcon Heavy rocket. He's saying that it will reduce launch costs to 2200 dollars/kg (1000 dollars/lbm*). If he's right, that'd be truly revolutionary.
*Ugh, pound masses.
Clicking the link at the top of SpaceX's page brings up the Falcon Heavy page. Apparently, the Falcon Heavy replaces Falcon 9 Heavy.
The Falcon Heavy (no 9) promises a payload of 53,000 kg to a shuttle-like low earth orbit (LEO)* for a launch cost between 80 million and 125 million dollars. This puts launch costs into the 1510 dollars/kg to 2360 dollars/kg range. That is truly phenomenal.
*200 km altitude, 28.5 degrees inclination
The Falcon 9 Heavy, a very similar vehicle, is listed as being able to lift 32000 kg to the same orbit with a launch cost of 95 million dollars, or 2970 dollars/kg. This is still great.
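Just to spell out the cost-per-kilogram arithmetic, here's a quick Python check of the quoted prices and payloads:

```python
# Cost per kilogram implied by the quoted prices and payloads.
for name, price_usd, payload_kg in [
    ("Falcon Heavy, low estimate", 80e6, 53_000),
    ("Falcon Heavy, high estimate", 125e6, 53_000),
    ("Falcon 9 Heavy", 95e6, 32_000),
]:
    print(f"{name}: {price_usd / payload_kg:,.0f} dollars/kg")
```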
Confusingly, both vehicles are referred to as "Falcon Heavy" on their respective webpages. They're clearly different vehicles, with Falcon Heavy being listed as taller, more massive and having somewhat greater thrust on liftoff. I presume it's a design evolution of the Falcon 9 Heavy with uprated Merlin engines and an improved second stage.
I hate to say it, but this performance seems too good to be true. So I'd like to get a feel for how close the Falcon Heavy comes to violating the fundamental physical constraints on what a rocket can do. So I'll do a sanity check.
SpaceX lists the structural fraction of the side boosters as 1/30*. Space Launch Report estimates the empty mass of a Falcon 9 1st stage as about 16600 kg. That gives a loaded mass of 498000 kg for each of the side boosters with 481400 kg of propellant in each. Presumably, the core booster is very similar to a stock Falcon 9 in structure and tankage. Space Launch Report estimates 315500 kg of propellant in it.
*This is very impressive. Space Launch Report puts the structural fraction for the first stage of the Falcon 9 Block 1 that's flown at about 1/22. So, SpaceX has some improving to do to reach 1/30.
Each of the Falcon Heavy's 27 Merlin engines should produce 630 kN of thrust. At a specific impulse of 275 s (exhaust velocity of 2698 m/s), that gives a propellant consumption of 233.53 kg/s per engine. This gives a side booster burnout (when all of their propellant has been used*) time of 152.7 seconds after launch -- which is very reasonable.
*The Falcon Heavy is said to have a cross feed system that allows the propellant in the side boosters to be drained completely before using the propellant in the core booster.
The 9 remaining engines of the core stage can continue burning for another 150.1 seconds. How serendipitous that the burnout times are so similar.
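Here's that flow rate and burn time arithmetic as a quick Python check, using the thrust and specific impulse quoted above and the Space Launch Report propellant estimates:

```python
g0 = 9.80665             # m/s^2, standard gravity
thrust = 630e3           # N, per Merlin engine
isp = 275.0              # s, first stage specific impulse as quoted
v_e = isp * g0           # ~2698 m/s exhaust velocity

mdot = thrust / v_e      # kg/s consumed per engine, ~233.5
print(f"propellant flow ~ {mdot:.1f} kg/s per engine")

side_propellant = 2 * 481_400.0   # kg, both side boosters, crossfed to all 27 engines
core_propellant = 315_500.0       # kg in the core stage

t_boosters = side_propellant / (27 * mdot)
t_core = core_propellant / (9 * mdot)
print(f"side boosters burn out ~ {t_boosters:.0f} s after launch")
print(f"core stage then burns for another ~ {t_core:.0f} s")
```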
Now we can use the rocket equation to calculate the maximum change in velocity (in the absence of atmospheric and gravity drag) for each stage. The rocket equation* is

$\Delta v = v_e \ln\left(\frac{m_0}{m_f}\right)$

where $\Delta v$ is the change in velocity, $v_e$ is the exhaust velocity, $m_0$ is the mass at the start of the burn and $m_f$ is the mass at the end of the burn.

*I'm using a Firefox script called TeX The World to render the equations. If you know of a better way to handle equations, please let me know.

At launch, the mass of the Falcon Heavy is 1400000 kg. That's $m_0$ for the first phase; burning the 962800 kg of side booster propellant leaves $m_f$ = 437200 kg, so with the vacuum exhaust velocity (about 304 s of specific impulse, or roughly 2983 m/s)

$\Delta v_1 = 2983\ \mathrm{m/s} \times \ln\left(\frac{1400000}{437200}\right) \approx 3470\ \mathrm{m/s}$

Now we shed the empty boosters ( 16600 kg each ), leaving a mass of 404000 kg for $m_0$ of the core stage phase. Burning the core's 315500 kg of propellant leaves $m_f$ = 88500 kg, so

$\Delta v_2 = 2983\ \mathrm{m/s} \times \ln\left(\frac{404000}{88500}\right) \approx 4530\ \mathrm{m/s}$

The velocity required to enter a circular orbit is

$v = \sqrt{\frac{\mu}{r}}$

where r is the radius of the orbit, v is the orbital velocity and $\mu$ is Earth's standard gravitational parameter (about $3.986 \times 10^{14}\ \mathrm{m^3/s^2}$). For a 200 km altitude orbit, r is about 6578 km, giving an orbital velocity of about 7790 m/s.
Note that, neglecting atmospheric and gravitational drag, the core stage is already travelling faster than this at burnout. And we haven't even accounted for the upper stage or the velocity bonus the rocket gets by launching eastward due to the Earth's rotation.
Optimistically estimating atmospheric and gravity losses at 1200 m/s (apparently, the Shuttle loses about 1330 m/s due to these), we'd still have a velocity of 6801 m/s before adding in the Earth's rotation. From Cape Canaveral into a 28.5 degree orbit, that boost should be about 358 m/s. So, before we've turned on the upper stage, the Falcon Heavy is travelling at about 7159 m/s: pretty close to orbital velocity.
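Here's the delta-v bookkeeping so far as a rough Python sketch. Note that the ~2983 m/s first stage vacuum exhaust velocity is my inferred value (roughly 304 s of specific impulse), not a published SpaceX figure; the masses and loss estimates are the ones given above.

```python
import math

# Delta-v bookkeeping for the first two burn phases of the Falcon Heavy.

v_e_vac = 2983.0             # m/s, assumed first stage vacuum exhaust velocity
m_liftoff = 1_400_000.0      # kg at launch
m_side_burnout = 437_200.0   # kg once the 962,800 kg of side booster propellant is gone
m_post_sep = 404_000.0       # kg after dropping the two 16,600 kg empty boosters
m_core_burnout = m_post_sep - 315_500.0   # kg when the core's propellant runs out

dv1 = v_e_vac * math.log(m_liftoff / m_side_burnout)     # ~3470 m/s
dv2 = v_e_vac * math.log(m_post_sep / m_core_burnout)    # ~4530 m/s

losses = 1200.0      # m/s, optimistic gravity plus aero losses
rotation = 358.0     # m/s, eastward boost from the Cape at 28.5 degrees

print(f"dv1 ~ {dv1:.0f} m/s, dv2 ~ {dv2:.0f} m/s")
print(f"velocity at upper stage ignition ~ {dv1 + dv2 - losses + rotation:.0f} m/s")
```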
Now things get a bit trickier. For one, the upper stage uses a different nozzle on its Merlin, improving specific impulse to 342 seconds (3355 m/s exhaust velocity). Worse, we don't know the structural fraction of the upper stage, so we don't know how much fuel it has. However, we do know the payload mass, the initial mass, the needed delta-v and the exhaust velocity, so we can calculate the structural fraction and see if it's a reasonable number. We just need to rearrange the rocket equation to find

$m_f = m_0 \, e^{-\Delta v / v_e}$
The remaining mass after the core stage separates from the upper stage is 71900 kg. The necessary delta-v is 7790 m/s - 7159 m/s = 631.0 m/s, giving us a remaining mass of 59573 kg. Subtract away the 53000 kg payload and that leaves 6573* kg for the upper stage structure. 6573 kg / 71900 kg gives a structural fraction of about 1/11 for the upper stage, which, as we saw earlier, is not unreasonable.
*Originally, I had a remaining mass of 3573 kg. Subtraction, my only weakness
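And here's the upper stage check as a quick Python sketch, using only the numbers above:

```python
import math

# Upper stage structural fraction check.

v_e_upper = 3355.0     # m/s, upper stage vacuum exhaust velocity (342 s)
m_ignition = 71_900.0  # kg, mass left after the core stage separates
payload = 53_000.0     # kg
dv_needed = 7790.0 - 7159.0    # ~631 m/s still needed for orbit

m_burnout = m_ignition * math.exp(-dv_needed / v_e_upper)
structure = m_burnout - payload
print(f"burnout mass ~ {m_burnout:,.0f} kg, structure ~ {structure:,.0f} kg")
print(f"structural fraction ~ 1/{m_ignition / structure:.0f}")
```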
So the Falcon Heavy passes my simple sanity check on its performance claims, assuming it can achieve the 1/30 structural fraction for its side boosters and keep the aero and gravity losses down. If it can, maybe it really will fulfill Musk's promises, revolutionize spaceflight and make humanity a spacefaring race.