A friend of mine asked me to explain science using a "top down" approach. Here's what I came up with:
Science is fundamentally a systematic way of determining "the rules." We posit the existence of underlying physical law (how the universe works), and science seeks to approximate that law with models. Models include theories, laws, hypotheses and facts. Commonly, "theory" and "law" refer to models that are accepted and largely validated (even if more accurate models exist). Facts are assertions modeled as (believed to be) logically true. As you can tell, there's no clear line differentiating theories, facts and laws in common usage. It doesn't really matter, though, because they're all models.
Science seeks to determine the models that best fit our observations of the world -- the "truest" models, if you will. It does so through a process of refinement called the scientific method. A model, called a hypothesis, is tested through observation -- checking its predictions against observations of the physical world. Generally, experiments are constructed to see how accurate a model is. If a model's accuracy cannot be determined through observation, the model is not "falsifiable" and is thus not scientific. That's not to say it isn't in some sense true, but it means the model's accuracy can't be measured. (Note that traditionally people talk of disproving hypotheses, but scientific models are not right or wrong; they are more or less accurate.)
The classic non-scientific statement is "God exists." Taking God to mean "an omniscient, omnipotent being," there's no way to measure God's existence. By being omnipotent and omniscient, God can exist but prevent any observation of evidence of God's existence. Since it's not falsifiable, the existence of God is not a scientific question.*
*I don't take this to mean that God doesn't exist, just that such existence is outside the purview of science. Some people do believe that non-falsifiable statements cannot be true.
Oftentimes, just observing the world as it happens isn't enough to determine how accurate a model is. There are often too many complicating factors, like weather, that constantly change. So scientists set up experiments that attempt to control as many factors as they can, then vary the factors to see if they have any impact on the result.
It turns out that our best scientific models all have drawbacks. Quantum Mechanics doesn't do gravity right and is too computationally expensive to be applied on large scales. General Relativity works well on large scales but not small ones and it doesn't do gravity entirely right, either. Newton's laws work really well on human scales, but not so well on really small or large or fast moving scales. So we have a wide variety of models to describe a wide variety of things.
That's the root of science. 1) The universe has rules. 2) Models approximate those rules. 3) Models are tested and their accuracy measured through observation, particularly experiments. 4) Models are refined or new models created to be more accurate approximations of the underlying universal rules.
Math (and I'm including logic) is a formal way to describe relationships. As such, scientific models inevitably end up with mathematical descriptions.
Engineering shows up too. It's the application of the scientific method and scientific models to create new objects -- be they physical items like airplanes, information like software or social structures like governments. Without the scientific side, it's not engineering, it's artisanship. Another way to look at it is that engineering is the application of scientific method and knowledge to art. So, naturally, engineers are both scientists and artists.*
*Yes, this is an idealization. But, really, without the scientist component an engineer is an artisan or an artist. And an engineer that doesn't create is more scientist or analyst. There is no shame in any of these occupations, though.
Monday, May 30, 2011
Sunday, May 29, 2011
Wait, Doctors don't use checklists?
Atul Gawande gave a commencement speech at Harvard Medical School discussing how to improve the health care system. A central theme is that medicine has grown to be very complex and doctors don't handle the complexity well. I'm rather surprised that doctors don't normally use checklists for complicated activities.
It strikes me as obvious that they should. Dr. Gawande has apparently written a book to this effect, too.
There are also reddit discussions for those interested.
Edit: Dr. Gawande has another article about the use of checklists in medicine and the thousands of lives they've saved in very limited application. It sounds like there's not a strong engineering mindset in the medical field.
Friday, May 27, 2011
Supernovae and Civilizations
Phil Plait recently mentioned that a supernova needs to be less than 100 light-years away to be harmful to our planet/civilization.
This was in context of a discussion about supernovae in the Trumpler 15 cluster, a "collection of thousands of stars packed into a volume of space only a few light years across." It seems to me that such clusters would be hostile to civilizations like ours developing.
If a large proportion of systems which could otherwise give birth to a civilization are in clusters with a high rate of supernovae, we should lower one of the "fraction of life/civilization" coefficients in the Drake equation.
Additionally, if the worlds with many nearby interstellar neighbors are the same worlds most likely to be affected by supernovae, then the fraction of civilizations that become starfaring should be reduced, as the civilizations that are safest from supernovae have the largest distances to overcome to become starfaring.
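To make the effect concrete: the Drake equation is just a product of factors, so discounting any one factor discounts the final count proportionally. Here's a minimal sketch of that; every number in it is a placeholder I picked for illustration, not an estimate.

```python
# Toy illustration of how lowering one Drake-equation factor scales the result.
# All values below are placeholders for illustration only, not estimates.

def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """N = estimated number of detectable civilizations in the galaxy."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

baseline = drake(R_star=1.0, f_p=0.5, n_e=2, f_l=0.5, f_i=0.1, f_c=0.1, L=10_000)

# Suppose supernova-dense clusters rule out some fraction of otherwise
# habitable systems -- fold that into f_l (or f_c) as a multiplier:
cluster_penalty = 0.8  # assumed: 20% of candidate systems sit in hostile clusters
adjusted = drake(R_star=1.0, f_p=0.5, n_e=2, f_l=0.5 * cluster_penalty,
                 f_i=0.1, f_c=0.1, L=10_000)

print(baseline, adjusted)  # the equation is linear in each factor,
                           # so N drops by the same 20%
```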
Monday, May 23, 2011
Drop it like it's hot (from the Moon)
Yet another discussion on space solar power on reddit pointed out the difficulties (both real and imagined) of transferring energy harvested in space back to Earth. Since this particular discussion centered around transferring energy from the Moon, I wondered if there might be some other way to transmit the energy from a body such as the moon.
What if we dropped some sort of energy storage device from the Moon onto Earth? Since Luna* is high up in Terra's gravity well and has a low escape velocity, doing so shouldn't be too difficult. So the question becomes, "What sort of energy storage device?"
*What are the preferable names between Luna and Moon and Terra and Earth?
Three possibilities came to mind: superconducting electromagnetic storage, flywheel storage and thermal storage. All three benefit from something space has in abundance -- vacuum. Vacuum combined with the low background temperature makes achieving the low temperatures necessary for superconducting storage a matter of proper shading, at least during the trans-Terra flight. Flywheels don't lose energy to drag, and thermal storage benefits from the ease with which a vacuum thermos can be created.
Presumably the cost of transporting material to the moon would be fairly high, so I decided to limit my consideration to devices that could be constructed on the moon. I started with the flywheel. Aluminum is very common on the moon, so I worked out the maximum flywheel storage for a very strong aluminum-lithium alloy, Weldalite 048-T8. I came up with an energy density of 273.1 kJ/kg (and that's just the mass of the flywheel, not any of the parts needed to get it safely to Earth). That's about 1/100th the energy density of coal, though extraction is more efficient, so call it about 1/60th the usable energy density of coal.
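For the curious, here's a minimal sketch of where a number like that comes from, using the standard flywheel limit E/m = K·σ/ρ. The strength, density and shape factor below are my own assumptions, chosen to roughly reproduce the 273.1 kJ/kg figure, not published Weldalite data.

```python
# Rough sketch of the flywheel specific-energy estimate.
# E/m = K * sigma / rho, where K is the flywheel shape factor
# (1.0 for an idealized constant-stress disk, lower for simpler shapes).
# Strength and density are assumed values for an Al-Li alloy, not vendor data.

tensile_strength = 710e6   # Pa, assumed ultimate tensile strength
density = 2600.0           # kg/m^3, assumed Al-Li alloy density
shape_factor = 1.0         # idealized constant-stress disk

specific_energy = shape_factor * tensile_strength / density  # J/kg
print(f"Flywheel specific energy: {specific_energy / 1e3:.1f} kJ/kg")
# -> roughly 273 kJ/kg, in line with the figure quoted above

# Coal holds very roughly 27 MJ/kg of chemical energy, so the flywheel
# stores on the order of 1/100th as much per kilogram.
coal_energy_density = 27e6  # J/kg, typical value
print(f"Ratio vs coal: 1/{coal_energy_density / specific_energy:.0f}")
```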
Waving aside all of the considerable other technical challenges, is the low energy density a show stopper for aluminum flywheel based interplanetary energy transfer?
Wikipedia gives world energy use as about 474 exajoules per year. That would require 1735 trillion kg of aluminum per year. While that doesn't push things into the impossible realm (Luna has a lot of aluminum), I'd say it's definitely infeasible -- particularly when you add the inefficiency due to extracting the aluminum from regolith, which takes at least 50 MJ/kg.
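The back-of-the-envelope arithmetic, for anyone who wants to check it (using the figures above; the 50 MJ/kg extraction cost is the assumed lower bound):

```python
# Back-of-the-envelope check on the mass and extraction-energy figures above.

world_energy_per_year = 474e18      # J (Wikipedia's ~474 EJ figure)
flywheel_specific_energy = 273.1e3  # J/kg, from the estimate above
regolith_extraction_energy = 50e6   # J/kg, assumed lower bound for refining Al

mass_needed = world_energy_per_year / flywheel_specific_energy
print(f"Aluminum needed per year: {mass_needed:.3e} kg "
      f"(~{mass_needed / 1e12:.0f} trillion kg)")

# Energy to refine that aluminum, vs the energy it would deliver:
extraction_cost = mass_needed * regolith_extraction_energy
print(f"Extraction energy: {extraction_cost:.2e} J, "
      f"about {extraction_cost / world_energy_per_year:.0f}x the energy delivered")
# Unless each flywheel is recovered and reused many times, refining the
# aluminum costs far more energy than the flywheel carries.
```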
Well, maybe one of the other options is more workable. I hope to actually detail my (really simple) analysis of this, um, out-of-the-box idea and look at thermal storage in an upcoming post.
Saturday, May 7, 2011
Welcoming Our Microkernel Inspired Robot Overlords
Operating system kernels perform roles quite analogous to governments. They control the distribution of resources, set and enforce rules and control rogue processes. As such, I think the microkernel/monokernel debate has an analogy in governmental theory.
Would government be improved if the central government only managed smaller, more specialized governments? States' rights groups and other supporters of federalism would likely think so. Traditional American federalism isn't the only governmental architecture that would map onto microkernels and their helper daemons, though. A set of smaller, specialized legislative bodies with authority defined by area of expertise rather than geographical location would fit well as daemons, too. The microkernel could then be the voting populace.
Many of the arguments for microkernels map pretty well onto government as well. Limited legislatures (or other rule-making bodies -- the system need not have a representative element) can be more rapidly shut down in case of failure or bad behavior. They can also be hot-swapped without bringing down the rest of the system, making it more robust and secure. Independently managed, empowered and funded specialist governments could potentially avoid the pain and inefficiency of a full government shutdown.
It seems the US government should be considered a hybrid system, with the federal government (particularly the legislative and upper executive branches) tending toward monolithic-ness and the state governments and executive departments bringing elements of microkernel-ness.
The analogy isn't perfect, of course. I'm not sure how the very useful idea of checks and balances would play into the normally hierarchical OS models. Perhaps a distributed systems analogy could be useful. Still, it seems like OS architectural considerations like the micro/monokernel discussion ought to be a fruitful framework for improving government design.
"Welcoming Our Microkernel Inspired Robot Overlords"
Operating system kernels perform roles quite analogous to governments. They control the distribution of resources, set and enforce rules and control rogue processes. As such, I think the microkernel/monokernel debate has an analogy in governmental theory.
Would government be improved if the central government only managed smaller, more specialized governments? States' rights groups and other supporters of federalism would likely think so. Traditional American federalism isn't the only governmental architecture that would map onto microkernels and their helper daemons, though. A set of smaller, specialized legislative bodies with authority defined by area of expertise rather than geographical location would fit well as daemons, too. The microkernel could then be the voting populace.
Many of the arguments for microkernels map pretty well onto government as well. Limited legislatures (or other rule making bodies -- the system need not have a representative element) can be more rapidly shutdown in case of failure or bad behavior. They can also be hotswapped without bringing down the rest of the system, making it more robust and secure. Independently managed, empowered and funded specialist governments could potentially avoid the pain and inefficiency of a full government shutdown.
It seems the US government should be considered a hybrid system. The federal government and particularly the legislative and upper executive branches tending towards monolithic-ness and the state governments and executive departments bringing elements of micorkernel-ness.
The analogy isn't perfect, of course. I'm not sure how the very useful idea of checks and balances would play into the normally hiearchial OS models. Perhaps a distributed systems analogy could be useful. Still, it seems like OS architectural considerations like micro/monokernel discussions ought to be a fruitful framework for improving government design.