NASA’s Parker Solar Probe Discovers Natural Radio Emission in Venus’ Atmosphere

During a brief swing by Venus, NASA’s Parker Solar Probe detected a natural radio signal that revealed the spacecraft had flown through the planet’s upper atmosphere. This was the first direct measurement of the Venusian atmosphere in nearly 30 years — and it looks quite different from Venus’ past. A study published today in Geophysical Research Letters confirms that Venus’ upper atmosphere undergoes puzzling changes over a solar cycle, the Sun’s 11-year activity cycle. This marks the latest clue to untangling how and why Venus and Earth are so different. The data sonification in the video translates data from Parker Solar Probe’s FIELDS instrument into sound. FIELDS detected a natural, low-frequency radio emission as it moved through Venus’ atmosphere that helped scientists calculate the thickness of the planet’s electrically charged upper atmosphere, called the ionosphere. Understanding how Venus’ ionosphere changes will help researchers determine how Venus, once so similar to Earth, became the world of scorching, toxic air it is today.

NASA Celebrates Asian American and Pacific Islander (AAPI) Heritage Month 2021

Each May, NASA commemorates Asian American and Pacific Islander (AAPI) Heritage Month to recognize the significant contributions of past and present employees of AAPI descent. Each of them embodies the enduring and resilient spirit this community brings to advancing science, research, and discovery. Hear their stories.

Featured in the video:
Anthony Arviola – Langley Research Center
Han Woong (Brian) Bae – Marshall Space Flight Center
Kelly Busquets – Goddard Space Flight Center
Sarat Calmur – Langley Research Center
Gemma Flores – NASA Headquarters
Wensheng Huang – Glenn Research Center
Miki Kenji – Glenn Research Center
Alex Lin – Langley Research Center
Rita Melvin – Goddard Space Flight Center
Kartik Sheth – NASA Headquarters
Steve Shih – NASA Headquarters
Emilie Siochi – Langley Research Center
Jenny Staggs – Armstrong Flight Research Center
Githika Tondapu – Marshall Space Flight Center
Sara Tsui – Kennedy Space Center
Jennifer Turner – Johnson Space Center

Video Credit: NASA 360 – Jessica Wilde, David Shelton, and Scott Bednar

NASA Tests System for Precise Aerial Positioning in Supersonic Flight

NASA recently flight tested a visual navigation system called the Airborne Location Integrating Geospatial Navigation System (ALIGNS). The system is designed to enhance precise aerial positioning between two aircraft in supersonic flight, and is being used by NASA to prepare for future acoustic validation flights of the agency’s X-59 Quiet SuperSonic Technology airplane.

Science Launching on SpaceX’s 22nd Cargo Resupply Mission to the Space Station

The 22nd SpaceX cargo resupply mission carrying scientific research and technology demonstrations launches to the International Space Station from NASA's Kennedy Space Center in Florida no earlier than June 3. Experiments aboard include studies of how water bears tolerate space, whether microgravity affects symbiotic relationships, how kidney stones form, and more.

NASA’s Neutral Buoyancy Laboratory 360 Tour

The Neutral Buoyancy Lab at the Sonny Carter Training Facility is a large swimming pool where astronauts work in an environment that simulates microgravity: they neither sink nor float. Astronauts use it to prepare for spacewalks and to train in their spacesuits for upcoming missions. Take a look around!

NASA’s New DAVINCI+ Mission to Venus

NASA has selected the DAVINCI+ (Deep Atmosphere Venus Investigation of Noble-gases, Chemistry and Imaging +) mission as part of its Discovery program; it will be the first spacecraft to enter the Venus atmosphere since NASA's Pioneer Venus in 1978 and the USSR's Vega in 1985. Named for the visionary Renaissance artist and scientist Leonardo da Vinci, the DAVINCI+ mission will bring 21st-century technologies to the world next door. DAVINCI+ may reveal whether Earth's sister planet looked more like Earth's twin in a distant, possibly hospitable past with oceans and continents.

The mission combines a spacecraft, developed by Lockheed Martin, and a descent probe, developed at NASA's Goddard Space Flight Center. The spacecraft will map the cloud motions and the surface composition of mountainous regions, including the Australia-sized Ishtar Terra. The descent probe will take a daring hour-long plunge through the massive and largely unexplored atmosphere to the surface, making detailed measurements of the atmosphere and surface the whole way down. These measurements include atmospheric samples and images that will allow scientists to deduce the planet's history, its possible watery past, and trace gases that serve as fingerprints of the planet's inner workings. The probe will descend over Alpha Regio, an intriguing highland terrain known as a "tessera" that rises nearly 10,000 feet above the surrounding plains and might be a remnant of an ancient continent. All of these measurements will help connect Earth's next-door neighbor to similar planets orbiting other stars that may be observed with the James Webb Space Telescope.

The DAVINCI+ team spans NASA centers (Goddard Space Flight Center, Jet Propulsion Laboratory, Langley Research Center, Ames Research Center), aerospace partners (Lockheed Martin), and universities (University of Michigan) to deliver groundbreaking science during the late 2020s and early 2030s, with a launch in 2029, flybys of Venus in 2030, and probe-based measurements in June 2031. The information sent back to Earth will rewrite the textbooks and inspire the next generation of planetary scientists. The NASA Goddard-led team includes Principal Investigator Jim Garvin and Deputy Principal Investigators Stephanie Getty and Giada Arney, as well as Project Manager Ken Schwer, lead Systems Engineer Michael Sekerak, and many others at Goddard, Lockheed Martin, and other institutions. The team is excited to return NASA to Venus to address our sister planet's long-standing mysteries!

Music: "Haymaker" – Jordan Rudess & Joseph Stevenson, via Universal Production Music
Video credit: NASA's Goddard Space Flight Center
Produced & Edited by: David Ladd (AIMM)
Narrated by: Jerome Hruska
Animations by: NASA's Conceptual Image Lab
Walt Feimer (KBRwyle) – Animation Manager/Animator
Michael Lentz (KBRwyle) – Art Director/Animator
Krystofer Kim (KBRwyle) – Animator
Jonathan North (KBRwyle)

Mystery of Galaxy’s Missing Dark Matter Deepens

When astronomers using NASA’s Hubble Space Telescope uncovered an oddball galaxy that looks like it doesn’t have much dark matter, some thought the finding was hard to believe and looked for a simpler explanation. Dark matter, after all, is the invisible glue that makes up the bulk of the universe’s contents. All galaxies are dominated by it; in fact, galaxies are thought to form inside immense halos of dark matter. So, finding a galaxy lacking the invisible stuff is an extraordinary claim that challenges conventional wisdom. It would have the potential to upset theories of galaxy formation and evolution.

Credit: NASA's Goddard Space Flight Center
Paul Morris: Lead Producer
Andrea Gianopoulos: Science Writer
Tracy Vogel: Science Writer
Additional Visualizations:
Galaxy Motion Simulation: Credit: ESO/L. Calçada
Dark Matter Simulation: Credit: Wu, Hahn, Wechsler, Abel (KIPAC); Visualization: Kaehler (KIPAC)
Music Credits: "Aphelion Horizon" by Alistair Hetherington [PRS] via Atmosphere Music Ltd. [PRS] and Universal Production Music.

Let's talk about uncertainty and its related concept, error.

You might think that science has nothing to do with uncertainty or error. Science should be about knowing things, and being sure of things. And indeed, scientists are confident about many of the theories they have, and are very accurate in many of the observations they make. But scientists always want to be aware of the limitations of their theories and the limitations of their data. So good science depends on understanding uncertainty and error.

These concepts are distinguishable. Uncertainty is involved, inevitably, with the measuring apparatus, with the theoretical framework for an observation, or with the physical universe itself. An error is something a little different. Usually, when scientists talk about an error, they're not talking about mistakes, as we might in everyday life. They're talking about errors associated with measurements or observations, which have to do with limitations of the measuring apparatus, or the telescope, say. So we can distinguish these two things. They're built into science, and they're completely unavoidable. Science is never perfect, and science is never absolutely 100% confident of its conclusions.
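As a minimal sketch of what an "error" attached to a measurement means in practice (the true value, the noise level, and the instrument here are all invented for illustration), repeated noisy measurements can be summarized by a mean and a standard error, and that error shrinks as more measurements are gathered:

```python
import numpy as np

# Hypothetical example: simulate repeated measurements of a quantity whose
# true value is 10.0, made with an instrument that adds random noise of
# spread 0.5 (all of these numbers are invented for illustration).
rng = np.random.default_rng(42)
true_value = 10.0
instrument_noise = 0.5

for n_measurements in (5, 50, 500):
    data = true_value + instrument_noise * rng.standard_normal(n_measurements)
    mean = data.mean()
    # Standard error of the mean: spread of the data divided by sqrt(N).
    std_error = data.std(ddof=1) / np.sqrt(n_measurements)
    print(f"N = {n_measurements:4d}: estimate = {mean:.3f} +/- {std_error:.3f}")
```

The "error" here is not a mistake; it is the irreducible scatter of the apparatus, and it shrinks roughly as one over the square root of the number of measurements.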

This level of contingency is unavoidable, and it's not a problem. It just means we keep moving towards greater certainty. And we must always pay attention to the errors.

Astronomy is in a particular situation among the sciences, because astronomers do not work in a laboratory with direct control of the experiment. The experiments of astronomy take place trillions of miles away at minimum; that's one light year. So astronomy is based on remote situations, and remote sensing of those situations. As a result, the uncertainties in astronomy are perhaps larger than in other, laboratory-based scientific fields. In some situations in astronomy we are quite unsure, especially in a new regime. We talk about an order of magnitude uncertainty, which means a factor of 10: the answer could be 10 times smaller, or 10 times larger, than our first guess. This seems almost like we don't know anything at all. But in a new regime, and in a universe that spans dozens of orders of magnitude of scale, sometimes this is where we start. As astronomers refine their measurements, in many cases they're quite happy with a factor of two, or 50%, uncertainty. For an observation such as the number of stars in a galaxy, or the size of a star, this is a reasonable estimate. It can tell us quite a lot, and it's hard to do much better. In many fields of astronomy, something with 10% precision or accuracy is about as good as we can get. Remember, we're dealing with limited amounts of information from objects that are very faint or very far away. And so 10% or 5% accuracy, such as for the expansion rate of the universe, involves an enormous amount of work even to reach. These are the realms of accuracy in astronomy, and we are gradually moving towards precision. There are some measurements in astrophysics where the accuracy is 1% or 2%; the age of the Earth, for example, is known to a precision of better than 1%.

We can talk in general about three different forms of uncertainty, or limitation, that occur in science, in all fields of science, not just astronomy. The most important, and perhaps the trickiest to diagnose, is a conceptual limitation of science. For example, we may make a false premise in a theory or an observation. We might confuse causation with correlation, something we'll talk about more, or our powerful pattern-recognition apparatus might lead us to infer a pattern where none actually exists.

Remember, we're conditioned to do this. If we were hunter-gatherers and we thought we saw the pattern of a leopard hiding in the underbrush, and we were wrong, then we just got a fright. If we didn't see the leopard when the leopard was there, we were lunch. So we're built to recognize patterns very powerfully, and sometimes in science this can lead us astray.

The second level of limitations in science is macroscopic, associated with our observations and our measurement apparatus. There is no perfect set of data in science; it simply doesn't exist. Every measurement has limitations, every data set is finite, and so there is uncertainty associated with those limitations. We can do our best to improve the observations, to improve the instrument and the apparatus, to make a bigger telescope or a more accurate detector. But we can never completely overcome those limitations.

The third level of limitations in science is microscopic. These are profound because they're associated with quantum uncertainty in the microscopic world. About 100 years ago, innovations in physics led to an awareness that there is a fundamental imprecision with which we can measure the physical world of subatomic particles. This limitation has nothing to do with our measuring apparatus. No amount of ingenuity or money can overcome it. It's built into the quantum nature of reality, and so it forms a profound limitation on our knowledge of the microscopic world. But when we're dealing with large macroscopic objects, like humans, objects that contain trillions and trillions of atoms, these uncertainties become insignificant. Quantum uncertainty in particular is a very strange beast, and physicists continue to debate, 100 years after these theories were invented, what their philosophical implications are. Einstein, for example, was extremely uncomfortable with these limitations in nature, these absolute obstacles to our understanding. He thought it was just a matter of ingenuity before we developed a deeper theory that could explain everything. At the moment we think Einstein was wrong on this matter, and there is no perfect theory of nature at the subatomic level.

It upset Einstein very much, you know? All that damned quantum jumping. It spoiled his idea of God, which I tell you frankly is the only idea of Einstein's I never understood. He believed in the same God as Newton: causality, nothing without a reason. But now one thing led to another until causality was dead. Quantum mechanics made everything, finally, random. The mathematics denies certainty; it reveals only probability and chance. And Einstein couldn't believe in a God who threw dice. Oh, he should have come to me, I would have told him: listen, he threw you. Look around, he never stopped.

Some principles in science are so foundational that we rarely think about them or question them. Even working scientists rarely question these ideas. One of the most important for astronomy is causality.

The idea that effects have causes, that nothing happens for no reason. This sounds simple, but it's very important. We have to operate as if causality applies. If causality doesn't apply, then effects could happen before their causes, which clearly makes no logical sense. This is a foundational principle because in every situation in astrophysics that we've encountered, when we observe a phenomenon, we imagine that there is a cause of that phenomenon, that nothing occurs without a reason. And so far, that's been validated. Determinism is the name in philosophy for this concept. And it applies in everyday life too.

Let me tell you a story. Imagine you get up in the morning and your car doesn't start. You check your car, and there's nothing obviously wrong. The key isn't broken, there are no disconnected wires under the dash, the lights haven't been left on all night, the battery seems fine. But your car just won't start. It's frustrating, but you get your friend, your roommate, out of bed. He knows more about cars than you do, so you get him to look at the car. Now, he does things that you can't do: he goes under the hood, checks more wiring, looks at your key, looks at the ignition. And after working for an hour, he gets exasperated and says, "I can't see anything wrong. Your car just won't start." You're not very happy with this explanation, so you call the dealer. And because your car's still under warranty, they show up in an hour with a fancy van and a lot of high-tech equipment. The dealer, wearing his fancy uniform with a logo, spends at least an hour on your car, testing this and testing that. And at the end of his time, with his hands dirty and some frustration on his face too, he says, "I'm sorry, I have tested everything. I can't find anything wrong. Your car just won't start."

What I'm asking you is to imagine you are in this situation. How would you react to this information? I think you, like most people, like me, would say that's unacceptable, that's wrong. There's obviously something wrong with your car, and this person is just not smart enough to find it. It's unacceptable for us to think that situations arise for no cause or no reason. So we all, in fact, operate as determinists in our everyday lives. And science uses this as a foundational principle.

In the realm of philosophy, however, determinism has a dark side. Determinism means that every effect has a cause. And the logic of this means that things are predictable, because a cause has an effect, and so on. Newtonian gravity, and the mechanics of everyday objects that Newton developed into mathematical theories, were presumed to be able to predict the behavior of everything in the universe. So mechanical or mathematical determinism is an idea that came about in the 17th century: the idea that we live in a clockwork universe, where if we could calculate with the theory of gravity, or the theories of atoms, accurately enough, we could predict with perfect precision the behavior of those atoms, or of the universe as a whole. Now, we've never shown any sign that we can do this, but at the time philosophers rebelled strongly against Newton's theories. They considered that this might rob us of free will, because if the universe is completely deterministic, everything is preordained and there is no free choice and no free will. This profound philosophical debate has not entirely disappeared. But it dissipated, of course,

when we realized that the quantum world has uncertainty built in on the ground floor.

For a more routine, everyday example of this, let's look at the distinction between causation and correlation. We think that things happen for a reason, and it's the job of science to find out that reason and make a physical explanation. But we start by dealing with data, and what the data is telling us is not always obvious. Let's look at a graph where we're plotting two quantities that we can measure; it doesn't really matter what those quantities are. We put them in a scatter plot. As we make more observations, each observation becomes a point on this graph, and we're looking for a pattern. We might have a set of observations where the scatter plot looks as if it were thrown down at random: there seems to be no rhyme or reason, no pattern to the dots. Or perhaps, as we gather more data, it seems that they follow a straight line. We can fit a line with some slope through the points, and we would say these quantities are correlated, that they seem to be mathematically related; if it's a straight line, it's a linear relationship. But perhaps we gather more data, and the graph changes to the point where we can no longer fit a straight line through the points. At this point, we have to reject that model, a linear relationship between the two quantities, in favor of a different model: a nonlinear relationship, a different form of mathematics that relates them, and presumably a different physical theory underlying that mathematical relationship. But notice how difficult and subtle this process is. How do we decide, in the absence of a final answer, how much data is enough to test for a correlation, and to make the distinction between a correlation and no correlation at all?
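As a minimal sketch of the fitting process just described (the data below is simulated, not a real observation), we can generate points from a gently curved relationship, fit both a straight line and a quadratic, and compare how well each model describes the data:

```python
import numpy as np

# Hypothetical example: two measured quantities x and y that actually follow
# a quadratic relationship, observed with some random scatter.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 40)
y = 1.0 + 0.5 * x + 0.08 * x**2 + rng.normal(0.0, 0.5, size=x.size)

for degree, label in ((1, "linear"), (2, "quadratic")):
    coeffs = np.polyfit(x, y, degree)        # least-squares fit of the model
    residuals = y - np.polyval(coeffs, x)    # data minus model
    rms = np.sqrt(np.mean(residuals**2))
    print(f"{label:9s} fit: {degree + 1} free parameters, RMS residual = {rms:.3f}")
```

The quadratic will always fit at least as well, simply because it has more free parameters, which is exactly why deciding between models takes more data and more care than it might first appear.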

How do we decide we've explored a wide enough range of the parameters to encompass the possibility that the relationship might not be the simplest one we can fit through the data? When you fit more complex relationships, or curves, through data points, you have more variables, what scientists call degrees of freedom, and there are more different models you can fit to your data, making you less certain that any one of those models is correct. These are all issues scientists have to deal with when they make observations of the world and explore their data, which they often do in graphs like this.

So I've talked about the formal aspects of correlation, and how we take measurements and try to look for relationships between these quantities, or parameters, or variables. But what about the underlying issue that we're trying to explain? This is the confusion between causation and correlation. It's a profound issue in science, and it has actually led to some famous mistakes in the history of science, because correlation does not, in and of itself, imply causation, that is, cause and effect for which there is a physical theory. As an almost trivial example, consider this relationship in observations made around the world between

the average temperature and the number of pirates in that region of the world. It's a very strange kind of correlation, but actually quite a good one: the higher the temperature, the greater the number of pirates. Does heat make pirates? Does it turn normal people into pirates? Or does pirate activity just happen to be concentrated in hot parts of the world? The correlation, of course, tells you nothing about what's going on, about what underlies the relationship, even though the relationship is present in the data. So inferring causation from correlation is a big step in science. It's a step that can't be taken lightly and must be taken carefully, because getting it wrong is a profound error.

Another example comes from the history of philosophy, a famous story told by Bertrand Russell, one of the greatest philosophers of the 20th century. He talked about a chicken growing up on a farm. Every morning, the sun would rise, the farmer would come out and scatter seed, and the chicken would eat. The chicken, being of small brain, naturally associated, or correlated, the rising sun with being fed and eating, a good outcome. And so over time, as days went by, and weeks, and months, the chicken made a quite plausible connection between this correlation and a causation: the rising sun was obviously making the farmer come out and feed him. But one morning the sun rises, and the farmer comes out and wrings the chicken's neck for the dinner table that night. The chicken has suffered a disastrous, catastrophic failure of logic, because it has confused correlation with causation.

Another example comes from the era of Margaret Mead in anthropology, in the South Pacific in the 1930s. Mead observed that some South Pacific islanders actually put head lice into their children's hair. This seemed like a very strange behavior, one she couldn't explain. It made no sense, until she realized that when children have a fever, their heads become hot, and the temperature range that lice will tolerate is fairly narrow. So when a child has a fever and is sick, they never have head lice. The islanders, in this case, were using head lice to try to ward off a fever: a very strange behavior, born of a confusion of correlation and causation.

As an example of unavoidable uncertainty, we should recognize that probability and chance play a role in science, in observation and in data. A macroscopic example might involve flipping a coin repeatedly, or rolling a die. The probability of a coin toss coming up heads is always 50%, assuming the coin is unaltered, and that is true no matter how many times the coin has been tossed. Understanding how these probabilities or uncertainties relate to each other is an important part of science. It's a common fallacy in the everyday world, one that lets places like Las Vegas make a lot of money, to think that if you've had a series of rolls of the dice without a six, maybe a large number of them, then a six is somehow more probable. A six always has a one-in-six chance on any roll of the die, just as a coin that has come up heads ten times in a row still has a one-in-two chance of coming up heads on the next toss. So what we see is that the behavior of these probabilistic situations becomes extremely predictable with a large number of events, but is unpredictable for any one event. Indeed, tossing a coin thousands of times, or rolling a die thousands of times, produces an outcome that is extremely reliable, within a narrow, well-determined range, of one in two or one in six.
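Both points, the gambler's fallacy and the reliability of long-run frequencies, can be checked directly with a simulated fair coin (a minimal sketch; the coin here is fair by construction):

```python
import numpy as np

rng = np.random.default_rng(7)

# Toss a simulated fair coin many times: 1 = heads, 0 = tails.
tosses = rng.integers(0, 2, size=100_000)

# The long-run fraction of heads is extremely predictable...
print("fraction of heads over 100,000 tosses:", tosses.mean())

# ...but the next toss after a streak is not. Look at every toss that
# immediately follows a run of five heads and see how often it is heads.
run = 0
after_streak = []
for t in tosses:
    if run >= 5:
        after_streak.append(t)
    run = run + 1 if t == 1 else 0
print("fraction of heads immediately after 5 heads in a row:",
      np.mean(after_streak))
```

Both printed fractions come out near one half: the aggregate behavior is tightly constrained, while the individual toss stays a 50/50 proposition no matter what came before it.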
So when scientists are dealing with individually uncertain events, they often gather lots of information to home in on an average behavior, which pins down the underlying probability within increasingly small bounds. The microscopic situation, dealing with the last layer of uncertainty, involves, for example, radioactive decay. The half-life of a radioactive element is the well-determined time, measured in physics experiments, within which half of the atoms in any sample will decay into a different by-product, usually a lower-mass element.

That's a well-determined number, accurate to several decimal places of precision. But if we look at any individual atom in that sample, we cannot say with any certainty when it will decay; it's completely unpredictable. A macroscopic example of this, a more trivial one, would be popcorn. You can take a set of popcorn kernels and predict, from a series of observations and different experiments, with quite good reliability how long it will take half of those kernels to pop. But if you take any particular kernel, its time to popping is extremely uncertain, almost indeterminate. So scientists routinely work in situations where there is either a quantum-level, fundamental uncertainty or an operational probability attached to an event. And they overcome these uncertainties or imprecisions by gathering more and more data. The principle drawn from this is that scientists are always data-hungry, always wanting to make more and more observations to refine the inferences they can draw from them. Science isn't perfect, and can never be perfect.
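As a minimal sketch of that contrast (the half-life value is arbitrary, chosen purely for illustration), we can simulate a large sample of identical atoms, each decaying at a random, exponentially distributed time, and compare the sharply determined ensemble half-life with the wide spread of individual decay times:

```python
import numpy as np

# Hypothetical element with a half-life of 10 time units (arbitrary choice).
half_life = 10.0
rng = np.random.default_rng(1)

# Each atom decays at a random time drawn from an exponential distribution
# whose mean lifetime is half_life / ln(2).
n_atoms = 1_000_000
decay_times = rng.exponential(scale=half_life / np.log(2), size=n_atoms)

# Ensemble behavior: the time by which half the sample has decayed is
# pinned down very precisely.
print("measured half-life of the sample:", np.median(decay_times))

# Individual behavior: any single atom's decay time is wildly uncertain.
print("one atom's decay time:", decay_times[0])
print("another atom's decay time:", decay_times[1])
print("spread (16th to 84th percentile):",
      np.percentile(decay_times, [16, 84]))
```

The median decay time of the whole sample lands right on the chosen half-life, while any single atom's decay time is scattered over a huge range, which is exactly the popcorn-kernel situation described above.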

There is no such thing as perfect data, and there is no such thing as an absolutely certain conclusion drawn from a scientific theory. The limitations of science occur at different levels. One level is conceptual, where we might confuse correlation with causation, or we might not justify the premises that underlie a theory. The second level is operational, based on our measurements. There's no such thing as a perfect measurement; scientists always want more and more measurements because there are errors or uncertainties attached. When scientists talk about an error, they don't mean a simple mistake. They're talking about a limitation of the observation based on the measurement apparatus itself. This is true whether it's in the lab or at a telescope. The final level of uncertainty is a floor imposed by quantum theory as it applies to individual atoms. In most aspects of astronomy, we're dealing with such large objects and such large numbers of atoms that these quantum uncertainties don't apply.

Reasoning is a very important part of the scientific method.

Reasoning involves how we use observations of the natural world and combine them to form structures, or theories, that explain a large range of phenomena. Humans are unusual in their powers of reasoning. Perhaps we are not unique; I'm confident there are other sentient animals with fairly high degrees of intelligence. But humans have taken reasoning to an entirely new level. Notice that although science is only a few thousand years old, and civilization only 10,000 years old, humans have had the power of reasoning for longer than that. The last detectable anatomical changes in humans, in their brain chemistry, date back about 40,000 years, when we were hunter-gatherers and nomads in Africa, even before we ventured into the other continents. A human alive then had exactly the same capabilities that we have now, and yet they lived in a primitive and simple world with no tools of technology or of science. What did they imagine about the world, using the same brain then that we have now?

The formal ideas of reasoning started with the Greek philosophers and Aristotle, a profoundly influential philosopher who affected physics, astronomy, mathematics, and many other fields. Aristotle developed the rules of deductive logic that still hold today. They were further codified by

Bertrand Russell in the 20th century. Aristotle's rules of deductive logic are central to how science combines statements about the natural world to draw conclusions and inferences. We can look at examples where deductive reasoning fails, and it's important to look at these examples and see whether you can tell why they fail. Sometimes deductive reasoning fails dramatically and in an obvious way, and sometimes the failure is a little subtle. Often, when we miss a failure of logic, it's because we don't question the premises.

Logic combines statements about the natural world, or observations, or theories, to draw a conclusion. But if those premises or assumptions are faulty, or not justified by data or observation, then the combination fails. Logic is just a tool; it can't establish the veracity of the statements that go into it. There are two fundamentally different kinds of logic that apply in science, in any field of science: deductive and inductive logic.

Deduction is the form of logic put together by Aristotle and burnished over the centuries since. An example of deductive logic involves arithmetic. The statement "2 plus 2 equals 4" is completely and self-consistently true. It doesn't depend on your opinion, your point of view, or whether there's a "y" in the month; it's always true. In that way, deductive logic always produces reliable conclusions, if the premises are valid. A simple example in astronomy: one observation demonstrates that the Earth is larger than the Moon. A second observation demonstrates that the Sun is larger than the Earth. We can deductively combine these two statements to conclude that the Sun is larger than the Moon. In this example, you can of course see both the power and the limitation of deductive logic. It's a very reliable conclusion, but in a sense you're not getting out any more than was already there in the two separate statements. So deductive reasoning alone cannot guide and drive science.

The second form of scientific reasoning is called induction, and it was first used to its full power by Isaac Newton in the arena of gravity. Induction, in simple terms, is generalizing to a broad theory from a specific or limited set of observations. This generalization happens in science all the time, because when we develop theories we can never have tested them in all possible situations. So we're making an inference; we're projecting our conclusion onto a broader set of situations, and that's how we test the theory. In Newton's case, he developed a universal law of gravity to explain the orbits of objects in the solar system, and at that time that meant just the Sun, the Earth, the Moon, and the planets observable with the naked eye: a very limited set of objects. His theory of gravity explained their orbits extremely well, but he was confident enough in the power of the theory to project it and imagine that it would apply to as-yet-unobserved situations. A great example was Halley's Comet. Newton's friend Edmond Halley used the theory to predict when the comet would reappear, and it was named for Halley when it did reappear as predicted. More profoundly, Newton made his observations and built his theory at a time when we only knew of one set of stars, the nearby stars of the Milky Way. But Newton's universal law of gravity turns out to explain the motions of stars in other galaxies, and the motions of galaxies themselves, in a universe with 100 billion galaxies. These as-yet-unobserved phenomena were explained by his theory: a dramatic example of induction, or generalization, from a very limited set of tests during his lifetime. An example from another field of science is the Darwinian theory of natural selection and evolution. Darwin, of course, was only able to make a limited set of observations, over a few decades, of life forms and how they evolved in response to natural selection in the environment. But the mechanisms he proposed in his book of 1859 were presumed to apply to all species and all forms of evolution over time in the history of the Earth, and they've proven valid for those larger situations: another example of induction at work. Science is based on reasoning, which we think is a particularly human attribute that we have above all the other animals.
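As a small illustrative sketch of induction in this spirit (using approximate, well-known orbital values, not material from the lecture), we can fit the relationship between orbital distance and period for a few inner planets and then extrapolate the fitted "law" to a planet that was left out of the fit:

```python
import numpy as np

# Approximate orbital data: semi-major axis in AU, period in years.
inner_planets = {
    "Mercury": (0.387, 0.241),
    "Venus":   (0.723, 0.615),
    "Earth":   (1.000, 1.000),
    "Mars":    (1.524, 1.881),
}

a = np.array([v[0] for v in inner_planets.values()])
T = np.array([v[1] for v in inner_planets.values()])

# Fit a power law T = a**k by fitting a straight line in log-log space.
k, intercept = np.polyfit(np.log10(a), np.log10(T), 1)
print(f"fitted exponent k = {k:.3f}  (Kepler's third law predicts 1.5)")

# Induction: apply the law, derived from four nearby planets, to Jupiter.
a_jupiter = 5.203
T_predicted = 10**intercept * a_jupiter**k
print(f"predicted period of Jupiter: {T_predicted:.1f} years (observed: about 11.9)")
```

The exponent recovered from four nearby planets is close to the 3/2 of Kepler's third law, and the extrapolation to Jupiter lands near the observed period; the same pattern of generalizing from a handful of cases to unobserved ones is what Newton did, on a far grander scale, with his law of gravity.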

This attribute has been present in our brains for tens of thousands of years, even though science is a lot younger. The formal application of reasoning comes in logic, and in particular in two forms of reasoning: deduction, where we combine specific statements or observations to draw a reliable conclusion, and induction, which involves generalizing from a finite set of observations to a much larger set of potential situations. Both forms of reasoning are used in all scientific fields to gain new knowledge.