Science in a broad sense existed before the modern era, and in many historical civilizations, but modern science is so distinct in its approach and successful in its results that it now defines what science is in the strictest sense of the term. Much earlier than the modern era, another important turning point was the development of classical natural philosophy in the ancient Greek-speaking world.
Pre-philosophical
Science in its original sense is a word for a type of knowledge (Latin scientia, Ancient Greek epistemē), rather than a specialized word for the pursuit of such knowledge. In particular, it is one of the types of knowledge which people can communicate to each other and share. For example, knowledge about the working of natural things was gathered long before recorded history and led to the development of complex abstract thinking, as shown by the construction of complex calendars, techniques for making poisonous plants edible, and buildings such as the pyramids. However, no consistent distinction was made between knowledge of such things, which are true in every community, and other types of communal knowledge, such as mythologies and legal systems.
Philosophical study of nature
See also: Nature (philosophy)
Before the invention or discovery of the concept of "nature" (Ancient Greek phusis) by the Pre-Socratic philosophers, the same words tended to be used to describe the natural "way" in which a plant grows[6] and the "way" in which, for example, one tribe worships a particular god. For this reason it is claimed these men were the first philosophers in the strict sense, and also the first people to clearly distinguish "nature" and "convention".[7]
Science was therefore distinguished as the knowledge of nature and of the things which are true for every community, and the name of the specialized pursuit of such knowledge was philosophy, the realm of the first philosopher-physicists. They were mainly speculators or theorists, particularly interested in astronomy. In contrast, trying to use knowledge of nature to imitate nature (artifice or technology, Greek technē) was seen by classical scientists as a more appropriate interest for lower-class artisans.[8]
Philosophical turn to human things
A major turning point in the history of early philosophical science was the controversial but successful attempt by Socrates to apply philosophy to the study of human things, including human nature, the nature of political communities, and human knowledge itself. He criticized the older type of study of physics as too purely speculative and lacking in self-criticism. He was particularly concerned that some of the early physicists treated nature as if it could be assumed that it had no intelligent order, explaining things merely in terms of motion and matter.
The study of human things had until then been the realm of mythology and tradition, and Socrates's challenge to it proved so controversial that he was tried and executed. Aristotle later created a less controversial systematic programme of Socratic philosophy, which was teleological and human-centred. He rejected many of the conclusions of earlier scientists. For example, in his physics the sun goes around the earth, and many things have it as part of their nature that they are for humans. Each thing has a formal cause and final cause and a role in the rational cosmic order. Motion and change are described as the actualization of potentials already in things, according to what types of things they are. While the Socratics insisted that philosophy should be used to consider the practical question of the best way to live for a human being, they did not argue for any other types of applied science.
Aristotle maintained the sharp distinction between science and the practical knowledge of artisans, treating theoretical speculation as the highest type of human activity, practical thinking about good living as something less lofty, and the knowledge of artisans as something only suitable for the lower classes. In contrast to modern science, Aristotle's influential emphasis was upon the "theoretical" steps of deducing universal rules from raw data; he did not treat the gathering of experience and raw data as part of science itself.[9]
Medieval science
During late antiquity and the early Middle Ages, the Aristotelian approach to inquiries into natural phenomena was used. Some ancient knowledge was lost, or in some cases kept in obscurity, during the fall of the Roman Empire and periodic political struggles. However, the general fields of science, or natural philosophy as it was then called, and much of the general knowledge from the ancient world remained preserved through the works of the early encyclopedists like Isidore of Seville. During the early medieval period, Syriac-speaking Christians of the Near East, such as the Nestorians and Monophysites, translated many of the important Greek scientific texts from Greek into Syriac, and later on into Arabic and other languages under Islamic rule.[10] This was a major line of transmission for the development of Islamic science, which provided much of the scientific activity of the early medieval period. In the later medieval period, Europeans recovered some ancient knowledge through translations of texts, and they built their work upon the knowledge of Aristotle, Ptolemy, Euclid, and others. In Europe, men like Roger Bacon learned Arabic and Hebrew and argued for more experimental science. By the late Middle Ages, a synthesis of Catholicism and Aristotelianism known as Scholasticism was flourishing in Western Europe, which had become a new geographic center of science.
Renaissance and early modern science
Main article: Scientific revolution
By the late Middle Ages, especially in Italy, there was an influx of texts and scholars from the collapsing Byzantine Empire. Copernicus formulated a heliocentric model of the solar system, unlike the geocentric model of Ptolemy's Almagest. All aspects of scholasticism were criticized in the 15th and 16th centuries; one author who was notoriously persecuted was Galileo, who made innovative use of experiment and mathematics. However, the persecution began after Pope Urban VIII gave Galileo permission to write about the Copernican system. Galileo had used arguments supplied by the Pope and put them into the voice of the simpleton in the work Dialogue Concerning the Two Chief World Systems, which caused great offense to Urban.[12]
In Northern Europe, the new technology of the printing press was widely used to publish many arguments, including some that disagreed with church dogma. René Descartes and Francis Bacon published philosophical arguments in favor of a new type of non-Aristotelian science. Descartes argued that mathematics could be used to study nature, as Galileo had done, and Bacon emphasized the importance of experiment over contemplation. Bacon also argued that science should, for the first time, aim at practical inventions for the improvement of all human life.
Bacon questioned the Aristotelian concepts of formal cause and final cause, and promoted the idea that science should study the laws of "simple" natures, such as heat, rather than assuming that there is any specific nature, or "formal cause", of each complex type of thing. This new modern science began to see itself as describing "laws of nature". This new approach to the study of nature was seen as mechanistic.
Age of Enlightenment
In the 17th and 18th centuries, the project of modernity, as promoted by Bacon and Descartes, led to rapid scientific advance and the successful development of a new type of natural science: mathematical, methodically experimental, and deliberately innovative. Newton and Leibniz succeeded in developing a new physics, now referred to as Newtonian physics, which could be confirmed by experiment and explained in mathematics. Leibniz also incorporated terms from Aristotelian physics, now used in a new, non-teleological way, for example "energy" and "potential". But in the style of Bacon, he assumed that different types of things all work according to the same general laws of nature, with no special formal or final causes for each type of thing.
It was during this period that the word "science" gradually became more commonly used to refer to the pursuit of a type of knowledge, and especially knowledge of nature, coming close in meaning to the old term "natural philosophy".
19th century
Both John Herschel and William Whewell systematised methodology; the latter coined the term scientist. When Charles Darwin published On the Origin of Species, he established descent with modification as the prevailing evolutionary explanation of biological complexity. His theory of natural selection provided a natural explanation of how species originated, but it only gained wide acceptance a century later. John Dalton developed the idea of atoms. The laws of thermodynamics and the electromagnetic theory were also established in the 19th century, raising new questions which could not easily be answered using Newton's framework.
20th century
Einstein's theory of relativity and the development of quantum mechanics led to the replacement of Newtonian physics with a new physics consisting of two parts that describe different types of events in nature. The extensive use of scientific innovation during the wars of this century led to the space race and widespread public appreciation of the importance of modern science.
Philosophy of science
Main article: Philosophy of science
Further information: Intersubjective verifiability and Subjunctive possibility
Working scientists usually take for granted a set of basic assumptions that are needed to justify a scientific method: (1) that there is an objective reality shared by all rational observers; (2) that this objective reality is governed by natural laws; (3) that these laws can be discovered by means of systematic observation and experimentation. Philosophy of science seeks a deep understanding of what these underlying assumptions mean and whether they are valid. Most contributions to the philosophy of science have come from philosophers, who frequently view the beliefs of most scientists as superficial or naive; thus there is often a degree of antagonism between working scientists and philosophers of science.
The belief that all observers share a common reality is known as realism. It can be contrasted with anti-realism, the belief that there is no valid concept of absolute truth such that things that are true for one observer are true for all observers. The most commonly defended form of anti-realism is idealism, the belief that the mind or spirit is the most basic essence, and that each mind generates its own reality.[15] In an idealistic world-view, what is true for one mind need not be true for other minds.
There are different schools of thought in philosophy of science. The most popular position is empiricism, which claims that knowledge is created by a process involving observation, and that scientific theories are the result of generalizations from such observations.[16] Empiricism generally encompasses inductivism, a position that tries to explain the way general theories can be justified by the finite number of observations humans can make, and hence the finite amount of empirical evidence available to confirm scientific theories. This is necessary because the number of predictions those theories make is infinite, which means that they cannot be known from the finite amount of evidence using deductive logic alone. Many versions of empiricism exist, with the predominant ones being Bayesianism[17] and the hypothetico-deductive method.[18]
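The Bayesian version of empiricism mentioned above can be made concrete with a toy sketch. This is an illustration, not anything from the text: a hypothetical hypothesis H starts with a prior probability, and each piece of evidence E updates that probability via Bayes' theorem, P(H|E) = P(E|H)P(H)/P(E). The likelihood values below are invented for the example.

```python
def update(prior, likelihood_if_true, likelihood_if_false):
    """Return the posterior P(H | E) after observing one piece of evidence E."""
    # Total probability of seeing E, whether or not H is true.
    evidence = likelihood_if_true * prior + likelihood_if_false * (1 - prior)
    return likelihood_if_true * prior / evidence

# Start agnostic about H, then fold in three observations, each of which is
# four times likelier if H is true (0.8) than if it is false (0.2).
belief = 0.5
for _ in range(3):
    belief = update(belief, likelihood_if_true=0.8, likelihood_if_false=0.2)

print(round(belief, 3))  # belief climbs toward 1 but never reaches certainty
```

The design mirrors the point made about induction: each observation raises the probability of the general theory, yet no finite number of observations drives it to exactly 1.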
Empiricism has stood in contrast to rationalism, the position originally associated with Descartes, which holds that knowledge is created by the human intellect, not by observation.[19] A significant twentieth-century version of rationalism is critical rationalism, first defined by the Austrian-British philosopher Karl Popper. Popper rejected the way that empiricism describes the connection between theory and observation. He claimed that theories are not generated by observation, but that observation is made in the light of theories, and that the only way a theory can be affected by observation is when it comes into conflict with it.[20] Popper proposed falsifiability as the hallmark of scientific theories, and falsification as the empirical method, to replace verifiability[21] and induction by purely deductive notions.[22] Popper further claimed that there is actually only one universal method, and that this method is not specific to science: the negative method of criticism, trial and error.[23] It covers all products of the human mind, including science, mathematics, philosophy, and art.[24]
Another approach, instrumentalism, colloquially termed "shut up and calculate", emphasizes the utility of theories as instruments for explaining and predicting phenomena.[25] It claims that scientific theories are black boxes, with only their input (initial conditions) and output (predictions) being relevant. The consequences, notions, and logical structure of the theories are claimed to be something that should simply be ignored and that scientists should not make a fuss about (see interpretations of quantum mechanics).
Finally, another approach often cited in debates of scientific skepticism against controversial movements like "scientific creationism" is methodological naturalism. Its main point is that a distinction should be made between natural and supernatural explanations, and that science should be restricted methodologically to natural explanations.[26] That the restriction is merely methodological (rather than ontological) means that science should not consider supernatural explanations itself, but should not claim them to be wrong either. Instead, supernatural explanations should be left a matter of personal belief outside the scope of science. Methodological naturalism maintains that proper science requires strict adherence to empirical study and independent verification as a process for properly developing and evaluating explanations for observable phenomena.[27] The absence of these standards, together with arguments from authority, biased observational studies, and other common fallacies, is frequently cited by supporters of methodological naturalism as grounds for regarding the dubious claims they criticize as not true science.
Basic and applied research
Although some scientific research is applied research into specific problems, a great deal of our understanding comes from the curiosity-driven undertaking of basic research. This leads to options for technological advance that were not planned or sometimes even imaginable. This point was made by Michael Faraday when, allegedly in response to the question "What is the use of basic research?", he responded: "Sir, what is the use of a new-born child?"[28] For example, research into the effects of red light on the human eye's rod cells did not seem to have any practical purpose; eventually, the discovery that our night vision is not troubled by red light would lead search and rescue teams (among others) to adopt red light in the cockpits of jets and helicopters.[29] In a nutshell: basic research is the search for knowledge; applied research is the search for solutions to practical problems using this knowledge. Finally, even basic research can take unexpected turns, and there is some sense in which the scientific method is built to harness luck.
Experimentation and hypothesizing
Based on observations of a phenomenon, scientists may generate a model. This is an attempt to describe or depict the phenomenon in terms of a logical, physical or mathematical representation. As empirical evidence is gathered, scientists can suggest a hypothesis to explain the phenomenon.[30] Hypotheses may be formulated using principles such as parsimony (also known as "Occam's razor") and are generally expected to seek consilience, fitting well with other accepted facts related to the phenomena.[31] This new explanation is used to make falsifiable predictions that are testable by experiment or observation. When a hypothesis proves unsatisfactory, it is either modified or discarded.[32] Experimentation is especially important in science to help establish causal relationships (to avoid the correlation fallacy). Operationalization also plays an important role in coordinating research in and across different fields.
Once a hypothesis has survived testing, it may become adopted into the framework of a scientific theory. This is a logically reasoned, self-consistent model or framework for describing the behavior of certain natural phenomena. A theory typically describes the behavior of much broader sets of phenomena than a hypothesis; commonly, a large number of hypotheses can be logically bound together by a single theory. Thus a theory is a hypothesis explaining various other hypotheses. In that vein, theories are formulated according to most of the same scientific principles as hypotheses.
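The cycle described above, where hypotheses make falsifiable predictions and are discarded when observation conflicts with them, can be sketched as a loose illustration. The data and hypotheses here are entirely made up for the example: two candidate "laws" are checked against a handful of observations, and a single conflicting case is enough to falsify one, while the other merely survives testing so far rather than being proven.

```python
# Made-up observations of some quantity y measured at inputs x.
observations = {0: 0, 1: 1, 2: 4, 3: 9}

# Two hypothetical candidate laws, each a falsifiable prediction rule.
hypotheses = {
    "doubling": lambda x: 2 * x,   # predicts 2 at x=1, conflicting with y=1
    "squaring": lambda x: x * x,   # consistent with every observation so far
}

results = {}
for name, predict in hypotheses.items():
    # One prediction that conflicts with observation falsifies the hypothesis;
    # passing every test only means it survives for now.
    falsified = any(predict(x) != y for x, y in observations.items())
    results[name] = "falsified" if falsified else "survives testing (for now)"

print(results)
```

Note the asymmetry the sketch encodes: falsification is a deductive verdict from a single counterexample, while survival is always provisional, which is why a surviving hypothesis is adopted into a theory rather than declared certain.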
While performing experiments, scientists may have a preference for one outcome over another, and so it is important to ensure that science as a whole can eliminate this bias.[33][34] This can be achieved by careful experimental design, transparency, and a thorough peer review process of the experimental results as well as any conclusions.[35][36] After the results of an experiment are announced or published, it is normal practice for independent researchers to double-check how the research was performed, and to follow up by performing similar experiments to determine how dependable the results might be.[37]
Certainty and science
A scientific theory is empirical and is always open to falsification if new evidence is presented. That is, no theory is ever considered strictly certain, as science accepts the concept of fallibilism. The philosopher of science Karl Popper sharply distinguishes truth from certainty. He writes that scientific knowledge "consists in the search for truth", but it "is not the search for certainty ... All human knowledge is fallible and therefore uncertain."[38]
New scientific knowledge very rarely results in vast changes in our understanding. According to psychologist Keith Stanovich, it may be the media's overuse of words like "breakthrough" that leads the public to imagine that science is constantly proving everything it thought was true to be false.[39] While there are such famous cases as the theory of relativity that required a complete reconceptualization, these are extreme exceptions. Knowledge in science is gained by a gradual synthesis of information from different experiments, by various researchers, across different domains of science; it is more like a climb than a leap.[40] Theories vary in the extent to which they have been tested and verified, as well as in their acceptance in the scientific community.[41] For example, heliocentric theory, the theory of evolution, and germ theory still bear the name "theory" even though, in practice, they are considered factual.[42]
Philosopher Barry Stroud adds that, although the best definition for "knowledge" is contested, being skeptical and entertaining the possibility that one is incorrect is compatible with being correct. Ironically then, the scientist adhering to proper scientific method will doubt themselves even once they possess the truth.[43] The fallibilist C. S. Peirce argued that inquiry is the struggle to resolve actual doubt and that merely quarrelsome, verbal, or hyperbolic doubt is fruitless[44]—but also that the inquirer should try to attain genuine doubt rather than resting uncritically on common sense.[45] He held that the successful sciences trust, not to any single chain of inference (no stronger than its weakest link), but to the cable of multiple and various arguments intimately connected.[46]
Stanovich also asserts that science avoids searching for a "magic bullet"; it avoids the single-cause fallacy. This means a scientist would not ask merely "What is the cause of...?", but rather "What are the most significant causes of...?". This is especially the case in the more macroscopic fields of science (e.g. psychology, cosmology).[47] Of course, research often analyzes only a few factors at a time, but these are always added to the long list of factors that are most important to consider.[47] For example, knowing the details of only a person's genetics, or their history and upbringing, or the current situation may not explain a behaviour, but a deep understanding of all these variables combined can be very predictive.