THESE ARE SOME OF MY NOTES ON ORGANIC CHEMISTRY IF YOU NEED ANY HELP
Structure and Nomenclature of Hydrocarbons
| What Is an Organic Compound? | The Saturated Hydrocarbons, or Alkanes | The Cycloalkanes | Rotation Around C-C Bonds | The Nomenclature of Alkanes | The Unsaturated Hydrocarbons: Alkenes and Alkynes |
When you drive up to the pump at some gas stations you are faced with a variety of choices.
You can buy "leaded" gas or different forms of "unleaded" gas that have different octane numbers. As you fill the tank, you might wonder, "What is 'leaded' gas, and why do they add lead to gas?" Or, "What would I get for my money if I bought premium gas, with a higher octane number?"
You then stop to buy drugs for a sore back that has been bothering you since you helped a friend move into a new apartment. Once again, you are faced with choices (see the figure below). You could buy aspirin, which has been used for almost a hundred years. Or Tylenol, which contains acetaminophen. Or a more modern pain-killer, such as ibuprofen. While you are deciding which drug to buy, you might wonder, "What is the difference between these drugs?," and even, "How do they work?"

You then drive to campus, where you sit in a "plastic" chair to eat a sandwich that has been wrapped in "plastic," without worrying about why one of these plastics is flexible while the other is rigid. While you're eating, a friend stops by and starts to tease you about the effect of your diet on the level of cholesterol in your blood, which brings up the questions, "What is cholesterol?" and "Why do so many people worry about it?"
Answers to each of these questions fall within the realm of a field known as organic chemistry. For more than 200 years, chemists have divided materials into two categories. Those isolated from plants and animals were classified as organic, while those that trace back to minerals were inorganic. At one time, chemists believed that organic compounds were fundamentally different from those that were inorganic because organic compounds contained a vital force that was only found in living systems.
The first step in the decline of the vital force theory occurred in 1828, when Friedrich Wöhler synthesized urea from inorganic starting materials. Wöhler was trying to make ammonium cyanate (NH4OCN) from silver cyanate (AgOCN) and ammonium chloride (NH4Cl). What he expected is described by the following equation.
AgOCN(aq) + NH4Cl(aq) → AgCl(s) + NH4OCN(aq)
The product he isolated from this reaction had none of the properties of cyanate compounds. It was a white, crystalline material that was identical to urea, H2NCONH2, which could be isolated from urine.
Neither Wöhler nor his contemporaries claimed that his results disproved the vital force theory. But his results set in motion a series of experiments that led to the synthesis of a variety of organic compounds from inorganic starting materials. This inevitably led to the disappearance of "vital force" from the list of theories that had any relevance to chemistry, although it did not lead to the death of the theory, which still had proponents more than 90 years later.
If the difference between organic and inorganic compounds isn't the presence of some
mysterious vital force required for their synthesis, what is the basis for distinguishing
between these classes of compounds? Most compounds extracted from living organisms contain
carbon. It is therefore tempting to identify organic chemistry as the chemistry of carbon.
But this definition would include compounds such as calcium carbonate (CaCO3), as well as the elemental forms of carbon (diamond and graphite) that are clearly inorganic.
We will therefore define organic chemistry as the chemistry of compounds that contain
both carbon and hydrogen.
Even though organic chemistry focuses on compounds that contain both carbon and hydrogen, more than 95% of the compounds that have been isolated from natural sources or synthesized in the laboratory are organic. The special role of carbon in the chemistry of the elements is the result of a combination of factors, including the number of valence electrons on a neutral carbon atom, the electronegativity of carbon, and the atomic radius of carbon atoms (see the table below).
The Physical Properties of Carbon
| Electronic configuration | 1s2 2s2 2p2 |
| Electronegativity | 2.55 |
| Covalent radius | 0.077 nm |
Carbon has four valence electrons (2s2 2p2), and it must either gain four electrons or lose four electrons to reach a rare-gas configuration. The
electronegativity of carbon is too small for carbon to gain electrons from most elements
to form C4- ions, and too large for carbon to lose electrons to form C4+
ions. Carbon therefore forms covalent bonds with a large number of other elements,
including the hydrogen, nitrogen, oxygen, phosphorus, and sulfur found in living systems.
Because they are relatively small, carbon atoms can come close enough together to form
strong C=C double bonds or even C≡C triple bonds. Carbon also forms strong double and triple bonds to nitrogen and oxygen. It
can even form double bonds to elements such as phosphorus or sulfur that do not form
double bonds to themselves.
Several years ago, the unmanned Viking spacecraft carried out experiments designed to search for evidence of life on Mars. These experiments were based on the assumption that living systems contain carbon, and the absence of any evidence for carbon-based life on that planet was presumed to mean that no life existed. Several factors make carbon essential to life.
- The ease with which carbon atoms form bonds to other carbon atoms.
- The strength of C-C single bonds and of the covalent bonds carbon forms to other nonmetals, such as N, O, P, and S.
- The ability of carbon to form multiple bonds to other nonmetals, including C, N, O, P, and S atoms.
These factors provide an almost infinite variety of potential structures for organic compounds, such as vitamin C shown in the figure below.
No other element can provide the variety of combinations and permutations necessary for life to exist.
The Saturated Hydrocarbons, or Alkanes
Compounds that contain only carbon and hydrogen are known as hydrocarbons. Those that contain as many hydrogen atoms as possible are said to be saturated. The saturated hydrocarbons are also known as alkanes.
The simplest alkane is methane: CH4. The Lewis structure of methane can be generated by combining the four electrons in the valence shell of a neutral carbon atom with four hydrogen atoms to form a compound in which the carbon atom shares a total of eight valence electrons with the four hydrogen atoms.

Methane is an example of a general rule that carbon is tetravalent; it
forms a total of four bonds in almost all of its compounds. To minimize the repulsion
between pairs of electrons in the four C-H bonds, the geometry around the carbon atom is tetrahedral, as shown in the figure below.
| Practice Problem 1: Use the fact that carbon is usually tetravalent to predict the formula of ethane, the alkane that contains two carbon atoms. |
The alkane that contains three carbon atoms is known as propane, which has the formula C3H8 and the following skeleton structure.
The four-carbon alkane is butane, with the formula C4H10.
The names, formulas, and physical properties for a variety of alkanes with the generic formula CnH2n+2 are given in the table below. The boiling points of the alkanes gradually increase with the molecular weight of these compounds. At room temperature, the lighter alkanes are gases; the midweight alkanes are liquids; and the heavier alkanes are solids, or tars.
The Saturated Hydrocarbons, or Alkanes
| Name | Molecular Formula | Melting Point (°C) | Boiling Point (°C) | State at 25°C |
|---|---|---|---|---|
| methane | CH4 | -182.5 | -164 | gas |
| ethane | C2H6 | -183.3 | -88.6 | gas |
| propane | C3H8 | -189.7 | -42.1 | gas |
| butane | C4H10 | -138.4 | -0.5 | gas |
| pentane | C5H12 | -129.7 | 36.1 | liquid |
| hexane | C6H14 | -95 | 68.9 | liquid |
| heptane | C7H16 | -90.6 | 98.4 | liquid |
| octane | C8H18 | -56.8 | 124.7 | liquid |
| nonane | C9H20 | -51 | 150.8 | liquid |
| decane | C10H22 | -29.7 | 174.1 | liquid |
| undecane | C11H24 | -24.6 | 195.9 | liquid |
| dodecane | C12H26 | -9.6 | 216.3 | liquid |
| eicosane | C20H42 | 36.8 | 343 | solid |
| triacontane | C30H62 | 65.8 | 449.7 | solid |
The alkanes in the table above are all straight-chain hydrocarbons, in which the carbon atoms form a chain that runs from one end of the molecule to the other. The generic formula for these compounds can be understood by assuming that they contain chains of CH2 groups with an additional hydrogen atom capping either end of the chain. Thus, for every n carbon atoms there must be 2n + 2 hydrogen atoms: CnH2n+2.
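The CnH2n+2 bookkeeping can be checked with a short script. This is a minimal sketch; the function name and the use of standard atomic weights are my own, not part of the original notes:

```python
# Generate the molecular formula and molecular weight of a
# straight-chain alkane from its carbon count n, using CnH2n+2.
ATOMIC_MASS = {"C": 12.011, "H": 1.008}  # standard atomic weights

def alkane_formula(n):
    """Return (formula string, molecular weight) for the n-carbon alkane."""
    h = 2 * n + 2                      # two H per CH2 group, plus one cap at each end
    formula = f"C{n}H{h}" if n > 1 else f"CH{h}"
    weight = n * ATOMIC_MASS["C"] + h * ATOMIC_MASS["H"]
    return formula, round(weight, 2)

print(alkane_formula(1))   # methane
print(alkane_formula(8))   # octane
```

Running it for n = 1 and n = 8 reproduces the formulas CH4 and C8H18 from the table above.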
Because two points define a line, the carbon skeleton of the ethane molecule is linear, as shown in the figure below.
Because the bond angle in a tetrahedron is 109.5°, alkane molecules that contain three or more carbon atoms can no longer be thought of as "linear," as shown in the figure below.
| Propane | Butane |
In addition to the straight-chain examples considered so far, alkanes also form branched structures. The smallest hydrocarbon in which a branch can occur has four carbon atoms. This compound has the same formula as butane (C4H10), but a different structure. Compounds with the same formula and different structures are known as isomers (from the Greek isos, "equal," and meros, "parts"). When it was first discovered, the branched isomer with the formula C4H10 was therefore given the name isobutane.
Isobutane
The best way to understand the difference between the structures of butane and isobutane is to compare the ball-and-stick models of these compounds shown in the figure below.
| Butane | Isobutane |
Butane and isobutane are called constitutional isomers because they literally differ in their constitution. One contains two CH3 groups and two CH2 groups; the other contains three CH3 groups and one CH group.
There are three constitutional isomers of pentane, C5H12. The first is "normal" pentane, or n-pentane.
A branched isomer is also possible, which was originally named isopentane. When a more highly branched isomer was discovered, it was named neopentane (the new isomer of pentane).
Ball-and-stick models of the three isomers of pentane are shown in the figure below.
| n-Pentane | Isopentane | Neopentane |
| Practice Problem 2: The following structures all have the same molecular formula: C6H14. Which of these structures represent the same molecule?
|
| Practice Problem 3: Determine the number of constitutional isomers of hexane, C6H14. |
There are two constitutional isomers with the formula C4H10, three isomers of C5H12, and five isomers of C6H14. The number of isomers of a compound increases rapidly with additional carbon atoms. There are over 4 billion isomers for C30H62, for example.
If the carbon chain that forms the backbone of a straight-chain hydrocarbon is long
enough, we can envision the two ends coming together to form a cycloalkane.
One hydrogen atom has to be removed from each end of the hydrocarbon chain to form the C-C bond that closes the ring. Cycloalkanes therefore have two fewer hydrogen atoms than the parent alkane and a generic formula of CnH2n.
The smallest alkane that can form a ring is cyclopropane, C3H6,
in which the three carbon atoms lie in the same plane. The angle between adjacent C-C bonds is only 60°, which is very much smaller than the 109.5° angle in a tetrahedron, as shown in the figure below.
Cyclopropane is therefore susceptible to chemical reactions that can open up the three-membered ring.
Any attempt to force the four carbons that form a cyclobutane ring into a plane of
atoms would produce the structure shown in the figure below, in which the angle between
adjacent C-C bonds would be 90°.
One of the four carbon atoms in the cyclobutane ring is therefore displaced from the plane of the other three to form a "puckered" structure that is vaguely reminiscent of the wings of a butterfly.
The angle between adjacent C-C bonds in a planar cyclopentane molecule would be 108°, which is close to the ideal angle around a
tetrahedral carbon atom. Cyclopentane is not a planar molecule, as shown in the figure
below, because displacing two of the carbon atoms from the plane of the other three
produces a puckered structure that relieves some of the repulsion between the hydrogen
atoms on adjacent carbon atoms in the ring.
By the time we get to the six-membered ring in cyclohexane, a puckered structure can be formed by displacing a pair of carbon atoms at either end of the ring from the plane of the other four members of the ring. One of these carbon atoms is tilted up, out of the ring, whereas the other is tilted down to form the "chair" structure shown in the figure below.
As one looks at the structure of the ethane molecule, it is easy to fall into the trap of thinking about this molecule as if it were static. Nothing could be further from the truth. At room temperature, the average velocity of an ethane molecule is about 500 m/s, more than twice the speed of a Boeing 747.
While it moves through space, the molecule is tumbling around its center of gravity like an airplane out of control. At the same time, the C-H and C-C bonds are vibrating like springs at rates as fast as 9 x 10^13 s^-1.
There is another way in which the ethane molecule can move. The CH3 groups
at either end of the molecule can rotate with respect to each other around the C-C bond. When this happens, the molecule passes through an infinite number of conformations that have slightly different energies. The highest energy conformation corresponds to a structure in which the hydrogen atoms are "eclipsed." If we view the molecule along the C-C bond, the hydrogen atoms on one CH3 group would obscure those on the other, as shown in the figure below.
The lowest energy conformation is a structure in which the hydrogen atoms are "staggered," as shown in the figure below.
The difference between the eclipsed and staggered conformations of ethane is best illustrated by viewing these molecules along the C-C bond, as shown in the figure below.
| Eclipsed | Staggered |
The difference between the energies of these conformations is relatively
small, only about 12 kJ/mol. But it is large enough that rotation around the C-C bond is not smooth. Although the frequency of this rotation is on the order of 10^10 revolutions per second, the ethane molecule spends a much larger percentage of the time in the staggered conformation.
The different conformations of a molecule are often described in terms of Newman
projections. These line drawings show the six substituents on the C-C bond as if the structure of the molecule were projected onto a piece of paper by shining a bright light along the C-C bond in a ball-and-stick model of the
molecule. Newman projections for the different staggered conformations of butane are shown
in the figure below.

Because of the ease of rotation around C-C bonds, there are several conformations of some of the cycloalkanes
described in the previous section. Cyclohexane, for example, forms both the
"chair" and "boat" conformations shown in the figure below.
| Chair | Boat |
The difference between the energies of the chair conformation, in which the hydrogen atoms are staggered, and the boat conformation, in which they are eclipsed, is about 30 kJ/mol. As a result, even though the rate at which these two conformations interchange is about 1 x 10^5 s^-1, we can assume that most cyclohexane molecules at any moment in time are in the chair conformation.
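A rough sense of how strongly the 12 kJ/mol and 30 kJ/mol energy differences bias the populations comes from a two-state Boltzmann estimate at room temperature. This is a deliberate simplification (it ignores, for example, the multiple equivalent staggered minima), and the helper below is my own sketch, not from the original notes:

```python
import math

R = 8.314   # gas constant, J/(mol K)
T = 298.0   # room temperature, K

def boltzmann_ratio(delta_e_kj):
    """Population of the higher-energy conformation relative to the
    lower-energy one, treating each conformation as a single state."""
    return math.exp(-delta_e_kj * 1000 / (R * T))

print(boltzmann_ratio(12))  # ethane: eclipsed vs staggered
print(boltzmann_ratio(30))  # cyclohexane: boat vs chair
```

The 12 kJ/mol case gives a ratio of roughly 0.008 (about 1 in 130), and the 30 kJ/mol case roughly 5 in a million, which is why "most cyclohexane molecules at any moment are in the chair conformation" is a safe assumption.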
Common names such as pentane, isopentane, and neopentane are sufficient to differentiate between the three isomers with the formula C5H12. They become less useful, however, as the size of the hydrocarbon chain increases.
The International Union of Pure and Applied Chemistry (IUPAC) has developed a systematic approach to naming alkanes and cycloalkanes based on the following steps.
- Find the longest continuous chain of carbon atoms in the skeleton structure. Name the compound as a derivative of the alkane with this number of carbon atoms. The following compound, for example, is a derivative of pentane because the longest chain contains five carbon atoms.
- Name the substituents on the chain. Substituents derived from alkanes are named by replacing the -ane ending with -yl. This compound contains a methyl (CH3-) substituent.
- Number the chain starting at the end nearest the first substituent and specify the carbon atoms on which the substituents are located. Use the lowest possible numbers. This compound, for example, is 2-methylpentane, not 4-methylpentane.
- Use the prefixes di-, tri-, and tetra- to describe substituents that are found two, three, or four times on the same chain of carbon atoms.
- Arrange the names of the substituents in alphabetical order.
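Real IUPAC naming requires analyzing the whole carbon skeleton, but the lowest-locant rule (step 3) and the di-/tri- prefixes (step 4) can be illustrated with a hypothetical helper that handles only the simple case of one parent chain carrying methyl substituents; all names and the overall simplification here are mine:

```python
# Toy namer for a single carbon chain with methyl substituents only.
PARENTS = {1: "methane", 2: "ethane", 3: "propane", 4: "butane",
           5: "pentane", 6: "hexane", 7: "heptane", 8: "octane"}
MULTIPLIERS = {1: "", 2: "di", 3: "tri", 4: "tetra"}

def name_methyl_alkane(chain_length, methyl_positions):
    """Name the chain, numbering from whichever end gives lower locants."""
    if not methyl_positions:
        return PARENTS[chain_length]
    forward = sorted(methyl_positions)
    backward = sorted(chain_length + 1 - p for p in methyl_positions)
    locants = min(forward, backward)          # step 3: lowest possible numbers
    prefix = MULTIPLIERS[len(locants)]        # step 4: di-, tri-, tetra-
    locant_str = ",".join(str(p) for p in locants)
    return f"{locant_str}-{prefix}methyl{PARENTS[chain_length]}"

print(name_methyl_alkane(5, [4]))     # numbered from the wrong end -> 2-methylpentane
print(name_methyl_alkane(4, [2, 2]))  # -> 2,2-dimethylbutane
```

Note how a methyl group written at position 4 of a five-carbon chain still comes out as 2-methylpentane, because renumbering from the other end gives the lower locant.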
| Practice Problem 4: Name the following compound.
|
| Practice Problem 5: Name the following compound.
|
The Unsaturated Hydrocarbons: Alkenes and Alkynes
Carbon not only forms the strong C-C single bonds found in alkanes, it also forms strong C=C double bonds. Compounds that
contain C=C double bonds were once known as olefins (literally, "to make an
oil") because they were hard to crystallize. (They tend to remain oily liquids when
cooled.) These compounds are now called alkenes. The simplest alkene has
the formula C2H4 and the following Lewis structure.
The relationship between alkanes and alkenes can be understood by
thinking about the following hypothetical reaction. We start by breaking the bond in an H2 molecule so that one of the electrons ends up on each of the hydrogen atoms. We do the same thing to one of the bonds between the carbon atoms in an alkene. We then allow the unpaired electron on each hydrogen atom to interact with the unpaired electron on a carbon atom to form a new C-H bond.
Thus, in theory, we can transform an alkene into the parent alkane by adding an H2 molecule across a C=C double bond. In practice, this reaction only occurs at high pressures in the presence of a suitable catalyst, such as a piece of nickel metal.
Because an alkene can be thought of as a derivative of an alkane from which an H2 molecule has been removed, the generic formula for an alkene with one C=C double bond is CnH2n.
Alkenes are examples of unsaturated hydrocarbons because they have fewer hydrogen atoms than the corresponding alkanes. They were once named by adding the suffix -ene to the name of the substituent that carried the same number of carbon atoms.
The IUPAC nomenclature for alkenes names these compounds as derivatives of the parent alkanes. The presence of the C=C double bond is indicated by changing the -ane ending on the name of the parent alkane to -ene.
The location of the C=C double bond in the skeleton structure of the compound is indicated by specifying the number of the carbon atom at which the C=C bond starts.
The names of substituents are then added as prefixes to the name of the alkene.
| Practice Problem 6: Name the following compound.
|
Compounds that contain C≡C triple bonds are called alkynes. These compounds have four fewer hydrogen atoms than the parent alkanes, so the generic formula for an alkyne with a single C≡C triple bond is CnH2n-2. The simplest alkyne has the formula C2H2 and is known by the common name acetylene.
The IUPAC nomenclature for alkynes names these compounds as derivatives of the parent alkane, with the ending -yne replacing -ane.
In addition to compounds that contain one double bond (alkenes) or one triple bond (alkynes), we can also envision compounds with two double bonds (dienes), three double bonds (trienes), or a combination of double and triple bonds.
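The three generic formulas above (CnH2n+2 for alkanes, CnH2n for alkenes or one ring, CnH2n-2 for alkynes, dienes, or two rings) can be unified with the standard "degrees of unsaturation" count, (2C + 2 - H)/2, the number of rings plus pi bonds implied by a hydrocarbon formula. The term and helper below are my addition, not part of the original notes:

```python
def degrees_of_unsaturation(c, h):
    """Rings plus pi bonds implied by a CcHh hydrocarbon formula:
    each ring or double bond costs two H relative to CnH2n+2."""
    return (2 * c + 2 - h) // 2

print(degrees_of_unsaturation(2, 6))   # ethane: 0
print(degrees_of_unsaturation(2, 4))   # ethylene: 1
print(degrees_of_unsaturation(2, 2))   # acetylene: 2
print(degrees_of_unsaturation(6, 12))  # cyclohexane or a hexene: 1
```

Note that the formula alone cannot distinguish a ring from a double bond: C6H12 could be cyclohexane or any of the hexenes.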
THINGS THAT DISTURB ME
Poverty in India
From Wikipedia, the free encyclopedia
Although the middle class has gained from recent positive economic developments, India suffers from substantial poverty. The Planning Commission, which is the nodal official agency for poverty estimation, has estimated that 27.5% of the population was living below the poverty line in 2004–2005, down from 51.3% in 1977–1978 and 36% in 1993-1994[1]. The source for this was the 61st round of the National Sample Survey (NSS), and the criterion used was monthly per capita consumption expenditure below Rs. 356.35 for rural areas and Rs. 538.60 for urban areas. 75% of the poor are in rural areas, most of them daily wagers, self-employed households and landless labourers. Despite a veneer of expansionist, "superpower" government priorities like nuclear weapons and a space program, India has more than 836 million people living on less than 50 cents ($2 in PPP) a day, according to a recent report.
Causes of poverty in India
There are at least two main schools of thought regarding the causes of poverty in India and the developing world in general.
The Developmentalist View
Colonial Economic Restructuring
Prime Minister Jawaharlal Nehru noted, "A significant fact which stands out is that those parts of India which have been longest under British rule are the poorest today." The Indian economy was purposely and severely deindustrialized (especially in the areas of textiles and metal-working) through colonial privatizations, regulations, tariffs on manufactured or refined Indian goods, taxes, and direct seizures.[2]
In 1830, India accounted for 17.6% of global industrial production against Britain's 9.5%, but by 1900 India's share was down to 1.7% against Britain's 18.5%. (The change in industrial production per capita is even more extreme due to Indian population growth.) [3]
Not only was Indian industry losing out, but consumers were forced to rely on expensive (monopoly-produced) British manufactured goods, especially as barter, local crafts, and subsistence agriculture were discouraged by law. The agricultural raw materials exported by Indians were subject to massive price swings and declining terms of trade.
Mass Hunger
British policies in India exacerbated the effects of weather conditions, leading to mass famines which, taken together, led to between 30 and 60 million deaths from starvation in the Indian colonies. Community grain banks were forcibly disabled, and land was converted from food crops for local consumption to cotton, opium, tea, and grain for export, largely for animal feed. [4]
In summary, deindustrialization, declining terms of trade, and the periodic mass misery of man-made famines are the major ways in which colonial government destroyed development in India and held it back for centuries.
The Neoliberal View
- Unemployment and under-employment, arising in part from protectionist policies pursued until 1991 that prevented high foreign investment. (Poverty nevertheless decreased significantly from the early 1980s to 1990.)[2][3]
- Lack of property rights. The right to property is not a fundamental right in India.
- Over-reliance on agriculture. There is a surplus of labour in agriculture. Farmers are a large vote bank and use their votes to resist reallocation of land for higher-income industrial projects. While services and industry have grown at double digit figures, agriculture growth rate has dropped from 4.8% to 2%. Neoliberals tend to view food security as an unnecessary goal compared to purely financial economic growth.
There are also a variety of more direct technical factors:
- About 60% of the population depends on agriculture whereas the contribution of agriculture to the GDP is about 28%[5].
- High population growth rate, although demographers generally agree that this is a symptom rather than cause of poverty.
And a few cultural ones have been proposed:
- The caste system, under which hundreds of millions of Indians were kept away from educational, ownership, and employment opportunities, and subjected to violence for "getting out of line." British rulers encouraged caste privileges and customs, at least before the 20th century.[6]
Despite this, India currently adds 40 million people to its middle class every year. Analysts such as Marvin J. Cetron, the founder of Forecasting International, write that an estimated 300 million Indians now belong to the middle class; one-third of them have emerged from poverty in the last ten years. At the current rate of growth, a majority of Indians will be middle-class by 2025. Literacy rates have risen from 52 percent to 65 percent in the same period.[4]
Historical trends in poverty statistics
The proportion of India's population below the poverty line has fluctuated widely in the past, but the overall trend has been downward. Roughly three periods of income-poverty trends can be distinguished.
1950 to mid-1970s: Income poverty reduction shows no discernible trend. In 1951, 47% of India's rural population was below the poverty line. The proportion went up to 64% in 1954-55; it came down to 45% in 1960-61 but in 1977-78, it went up again to 51%.
Mid-1970s to 1990: Income poverty declined significantly between the mid-1970s and the end of the 1980s. The decline was more pronounced between 1977-78 and 1986-87, with rural income poverty declining from 51% to 39%. It went down further to 34% by 1989-90. Urban income poverty went down from 41% in 1977-78 to 34% in 1986-87, and further to 33% in 1989-90.
After 1991: This post-economic-reform period evidenced both setbacks and progress. Rural income poverty increased from 34% in 1989-90 to 43% in 1992 and then fell to 37% in 1993-94. Urban income poverty went up from 33.4% in 1989-90 to 33.7% in 1992 and declined to 32% in 1993-94. Also, NSS data for 1994-95 to 1998 show little or no poverty reduction, so the evidence up to 1999-2000 was that poverty, particularly rural poverty, had increased post-reform. However, the official estimate of poverty for 1999-2000 was 26.1%, a dramatic decline that led to much debate and analysis, because for that year the NSS had adopted a new survey methodology that produced both a higher estimated mean consumption and an estimated distribution that was more equal than in past NSS surveys. The latest NSS survey, for 2004-05, is fully comparable to the surveys before 1999-2000 and shows poverty at 28.3% in rural areas, 25.7% in urban areas, and 27.5% for the country as a whole. Thus, poverty has declined after 1998, although it is still debated whether there was any significant poverty reduction between 1989-90 and 1999-2000. The latest NSS survey was also designed to give estimates roughly, but not fully, comparable to the 1999-2000 survey. These suggest that most of the decline in rural poverty during 1993-94 to 2004-05 actually occurred after 1999-2000.
In summary, the official poverty rates recorded by NSS are:
| Year | Round | Poverty Rate (%) | Poverty Reduction per Year (%) |
|---|---|---|---|
| 1977-78 | 32 | 51.3 | |
| 1983 | 38 | 44.5 | 1.3 |
| 1987-88 | 43 | 38.9 | 1.2 |
| 1993-94 | 50 | 36.0 | 0.5 |
| 1999-2000 | 55 | (26.09) | not comparable |
| 2004-2005 | 61 | 27.5 | 0.8 |
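The "reduction per year" column can be roughly recomputed from adjacent survey rows. The sketch below treats split survey years by their midpoints (e.g. 1977-78 as 1977.5), which is my own assumption; the results land close to, though not always exactly on, the printed figures:

```python
# Recompute annualized poverty reduction between consecutive NSS rounds.
# (1999-2000 is excluded, since the table marks it as not comparable.)
surveys = [(1977.5, 51.3), (1983.0, 44.5), (1987.5, 38.9),
           (1993.5, 36.0), (2004.5, 27.5)]  # (midpoint year, poverty rate %)

rates = [(p0 - p1) / (y1 - y0)
         for (y0, p0), (y1, p1) in zip(surveys, surveys[1:])]

for (y0, _), (y1, _), r in zip(surveys, surveys[1:], rates):
    print(f"{y0:.1f} -> {y1:.1f}: {r:.2f} % per year")
```

The last two intervals reproduce the table's 0.5 and 0.8; the earlier ones come out near 1.2, suggesting the published column used a slightly different year convention.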
History of attempts to alleviate poverty
Since the early 1950s, the government has initiated, sustained, and refined various planning schemes to help the poor attain self-sufficiency in food production. Probably the most important initiative has been the supply of basic commodities, particularly food at controlled prices, available throughout the country, since the poor spend about 80 percent of their income on food.
Programmes like Food for Work and the National Rural Employment Programme have attempted to use the unemployed to generate productive assets and build rural infrastructure.[3] Other anti-poverty programmes include the Rural Landless Employment Guarantee Programme.
The Rural Landless Employment Guarantee Programme was instituted in FY 1983 to address the plight of the hard-core rural poor by expanding employment opportunities and building the rural infrastructure as a means of encouraging rapid economic growth. There were many problems with the implementation of these and other schemes, but observers credit them with helping reduce poverty. To improve the effectiveness of the National Rural Employment Programme, in 1989 it was combined with the Rural Landless Employment Guarantee Programme and renamed Jawahar Rozgar Yojana, or Jawahar Employment Plan (see Development Programs, ch. 7).
In August 2005, the Indian Parliament passed the Rural Employment Guarantee Bill, the largest programme of this type in terms of cost and coverage, which promises 100 days of minimum wage employment to every rural household, in 200 of India's 600 districts. The question of whether economic reforms have reduced poverty or not has fueled debates without generating any clearcut answers, and has also put political pressure on further economic reforms, especially those involving downsizing of labour and reduction of agricultural subsidies.[2][5]
Outlook for poverty alleviation
Eradication of poverty in India can only be a long-term goal. Poverty alleviation is expected to make better progress in the next 50 years than in the past, as a trickle-down effect of the growing middle class. Increasing stress on education, reservation of seats in government jobs and the increasing empowerment of women and the economically weaker sections of society, are also expected to contribute to the alleviation of poverty. It is incorrect to say that all poverty reduction programmes have failed. The growth of the middle class (which was virtually non-existent when India became a free nation in August 1947) indicates that economic prosperity has indeed been very impressive in India, but the distribution of wealth is not at all even.
Controversy over extent of poverty reduction
While total overall poverty in India has declined, the extent of poverty reduction is often debated. While there is a consensus that there has not been an increase in poverty between 1993-94 and 2004-05, the picture is not so clear if one considers other, non-pecuniary dimensions (such as health, education, crime and access to infrastructure). With the rapid economic growth that India is experiencing, it is likely that a significant fraction of the rural population will continue to migrate toward cities, making the issue of urban poverty more significant in the long run [6].
Economist Pravin Visaria has defended the validity of many of the statistics that demonstrated the reduction in overall poverty in India, as well as the declaration made by India's Finance Minister Yashwant Sinha that poverty in India has reduced significantly. He insisted that the 1999-2000 survey was well designed and supervised, and felt that just because the results did not appear to fit preconceived notions about poverty in India, they should not be dismissed outright[7]. Nicholas Stern, vice president of the World Bank, has published defenses of the poverty reduction statistics. He argues that increasing globalization and investment opportunities have contributed significantly to the reduction of poverty in the country. India, together with China, has shown the clearest trends of globalization, with an accelerated rise in per-capita income.[8]
A 2007 report by the state-run National Commission for Enterprises in the Unorganised Sector (NCEUS) found that 77% of Indians, or 836 million people, lived on less than 20 rupees per day (USD 0.50 nominal, USD 2.0 in PPP), with most working in "informal labour sector with no job or social security, living in abject poverty."
F-16
Orthographically projected diagram of the F-16.
Data from USAF sheet,[22] AerospaceWeb[23]
General characteristics
* Crew: 1
* Length: 49 ft 5 in (14.8 m)
* Wingspan: 32 ft 8 in (9.8 m)
* Height: 16 ft (4.8 m)
* Wing area: 300 ft² (27.87 m²)
* Airfoil: NACA 64A204 root and tip
* Empty weight: 18,238 lb (8,272 kg)
* Loaded weight: 26,463 lb (12,003 kg)
* Max takeoff weight: 42,300 lb (19,200 kg)
* Powerplant: 1× Pratt & Whitney F100-PW-220 afterburning turbofan
o Dry thrust: 14,590 lbf (64.9 kN)
o Thrust with afterburner: 23,770 lbf (105.7 kN)
* Alternate powerplant: 1× General Electric F110-GE-100 afterburning turbofan
o Dry thrust: 17,155 lbf (76.3 kN)
o Thrust with afterburner: 28,985 lbf (128.9 kN)
Performance
* Maximum speed:
o At sea level: Mach 1.2 (915 mph, 1,460 km/h)
o At altitude: Mach 2+ (1,500 mph, 2,414 km/h)
* Combat radius: 340 mi (295 NM, 550 km) on a hi-lo-hi mission with six 1,000 lb (450 kg) bombs
* Ferry range: >3,200 mi (2,800 NM, 4,800 km)
* Service ceiling: >50,000 ft (15,239 m)
* Rate of climb: 50,000 ft/min (254 m/s)
* Wing loading: 88.2 lb/ft² (431 kg/m²)
* Thrust/weight: For F100 engine: 0.898, For F110: 1.095
M61A1 on display.
Armament
* Guns: 1× 20 mm (0.787 in) M61 Vulcan Gatling gun, 511 rounds
* Rockets: 2¾ in (70 mm) CRV7
* Missiles:
o Air-to-air missiles:
+ 2× AIM-7 Sparrow or
+ 6× AIM-9 Sidewinder or
+ 6× AIM-120 AMRAAM or
+ 6× Python-4
o Air-to-ground missiles:
+ 6× AGM-45 Shrike or
+ 6× AGM-65 Maverick or
+ 6× AGM-88 HARM
o Anti-ship missiles:
+ 4× AGM-84 Harpoon or
+ 4× AGM-119 Penguin
* Bombs:
o 2× CBU-87 Combined Effects Munition
o 2× CBU-89 Gator mine
o 2× CBU-97 Sensor Fuzed Weapon
o Wind Corrected Munitions Dispenser capable
o 4× GBU-10 Paveway
o 6× GBU-12 Paveway II
o 6× Paveway-series laser-guided bombs
o 4× JDAM
o 4× Mk 80 series
o B61 nuclear bomb
Popular culture
The F-16 can be seen in movies such as Blue Thunder, The Jewel of the Nile, the Iron Eagle series, X2, and The Sum of All Fears. It also appears, in a more negative light, in the 1992 TV movie Afterburn.
Due to its widespread adoption, the F-16 has been a popular subject for computer flight simulators, appearing in over twenty games, among them the Falcon series (1987-2005), F-16 Fighting Falcon (1984), Jet (1989), Strike Commander (1993), iF-16 (1997), F-16 Multi-role Fighter (1998), and F-16 Aggressor (1999); the Thrustmaster "HOTAS Cougar" flight-simulator controller is an exacting reproduction of the controls found in the F-16 Block 40/50. The F-16 is also one of two airplanes available in the built-in flight simulator in Google Earth.
QUANTUM MECHANICS
Quantum mechanics is, at least at first glance and at least in part, a mathematical machine for predicting the behaviors of microscopic particles — or, at least, of the measuring instruments we use to explore those behaviors — and in that capacity, it is spectacularly successful: in terms of power and precision, head and shoulders above any theory we have ever had. Mathematically, the theory is well understood; we know what its parts are, how they are put together, and why, in the mechanical sense (i.e., in a sense that can be answered by describing the internal grinding of gear against gear), the whole thing performs the way it does, how the information that gets fed in at one end is converted into what comes out the other. The question of what kind of a world it describes, however, is controversial; there is very little agreement, among physicists and among philosophers, about what the world is like according to quantum mechanics. Minimally interpreted, the theory describes a set of facts about the way the microscopic world impinges on the macroscopic one, how it affects our measuring instruments, described in everyday language or the language of classical mechanics. Disagreement centers on the question of what a microscopic world, which affects our apparatuses in the prescribed manner, is, or even could be, like intrinsically; or how those apparatuses could themselves be built out of microscopic parts of the sort the theory describes.[1]
That is what an interpretation of the theory would provide: a proper account of what the world is like according to quantum mechanics, intrinsically and from the bottom up. The problems with giving an interpretation (not just a comforting, homey sort of interpretation, i.e., not just an interpretation according to which the world isn't too different from the familiar world of common sense, but any interpretation at all) are dealt with in other sections of this encyclopedia. Here, we are concerned only with the mathematical heart of the theory, the theory in its capacity as a mathematical machine, and — whatever is true of the rest of it — this part of the theory makes exquisitely good sense.
1. Terminology
Physical systems are divided into types according to their unchanging (or ‘state-independent’) properties, and the state of a system at a time consists of a complete specification of those of its properties that change with time (its ‘state-dependent’ properties). To give a complete description of a system, then, we need to say what type of system it is and what its state is at each moment in its history.
A physical quantity is a mutually exclusive and jointly exhaustive family of physical properties (for those who know this way of talking, it is a family of properties with the structure of the cells in a partition). Knowing what kinds of values a quantity takes can tell us a great deal about the relations among the properties of which it is composed. The values of a bivalent quantity, for instance, form a set with two members; the values of a real-valued quantity form a set with the structure of the real numbers. This is a special case of something we will see again and again, viz., that knowing what kind of mathematical objects represent the elements in some set (here, the values of a physical quantity; later, the states that a system can assume, or the quantities pertaining to it) tells us a very great deal (indeed, arguably, all there is to know) about the relations among them.
In quantum mechanical contexts, the term ‘observable’ is used interchangeably with ‘physical quantity’, and should be treated as a technical term with the same meaning. It is no accident that the early developers of the theory chose the term, but the choice was made for reasons that are not, nowadays, generally accepted. The state-space of a system is the space formed by the set of its possible states,[2] i.e., the physically possible ways of combining the values of quantities that characterize it internally. In classical theories, a set of quantities which forms a supervenience basis for the rest is typically designated as ‘basic’ or ‘fundamental’, and, since any mathematically possible way of combining their values is a physical possibility, the state-space can be obtained by simply taking these as coordinates.[3] So, for instance, the state-space of a classical mechanical system composed of n particles, obtained by specifying the values of 6n real-valued quantities — three components of position, and three of momentum for each particle in the system — is a 6n-dimensional coordinate space. Each possible state of such a system corresponds to a point in the space, and each point in the space corresponds to a possible state of such a system. The situation is a little different in quantum mechanics, where there are mathematically describable ways of combining the values of the quantities that don't represent physically possible states. As we will see, the state-spaces of quantum mechanics are special kinds of vector spaces, known as Hilbert spaces, and they have more internal structure than their classical counterparts.
A structure is a set of elements on which certain operations and relations are defined, a mathematical structure is just a structure in which the elements are mathematical objects (numbers, sets, vectors) and the operations mathematical ones, and a model is a mathematical structure used to represent some physically significant structure in the world.
The heart and soul of quantum mechanics is contained in the Hilbert spaces that represent the state-spaces of quantum mechanical systems. The internal relations among states and quantities, and everything this entails about the ways quantum mechanical systems behave, are all woven into the structure of these spaces, embodied in the relations among the mathematical objects which represent them.[4] This means that understanding what a system is like according to quantum mechanics is inseparable from familiarity with the internal structure of those spaces. Know your way around Hilbert space, and become familiar with the dynamical laws that describe the paths that vectors travel through it, and you know everything there is to know, in the terms provided by the theory, about the systems that it describes.
By ‘know your way around’ Hilbert space, I mean something more than possess a description or a map of it; anybody who has a quantum mechanics textbook on their shelf has that. I mean know your way around it in the way you know your way around the city in which you live. This is a practical kind of knowledge that comes in degrees and it is best acquired by learning to solve problems of the form: How do I get from A to B? Can I get there without passing through C? And what is the shortest route? Graduate students in physics spend long years gaining familiarity with the nooks and crannies of Hilbert space, locating familiar landmarks, treading its beaten paths, learning where secret passages and dead ends lie, and developing a sense of the overall lay of the land. They learn how to navigate Hilbert space in the way a cab driver learns to navigate his city.
How much of this kind of knowledge is needed to approach the philosophical problems associated with the theory? In the beginning, not very much: just the most general facts about the geometry of the landscape (which is, in any case, unlike that of most cities, beautifully organized), and the paths that (the vectors representing the states of) systems travel through them. That is what will be introduced here: first a bit of easy math, and then, in a nutshell, the theory.
2. Mathematics
Vectors and vector spaces
A vector A, written ‘|A>’, is a mathematical object characterized by a length, |A|, and a direction. A normalized vector is a vector of length 1; i.e., |A| = 1. Vectors can be added together, multiplied by constants (including complex numbers), and multiplied together. Vector addition maps any pair of vectors onto another vector, specifically, the one you get by moving the second vector so that its tail coincides with the tip of the first, without altering the length or direction of either, and then joining the tail of the first to the tip of the second. This addition rule is known as the parallelogram law. So, for example, adding vectors |A> and |B> yields vector |C> (= |A> + |B>) as in Figure 1. Multiplying a vector |A> by n, where n is a constant, gives a vector which is the same direction as |A> but whose length is n times |A>'s length.
Figure 1: Vector Addition
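These two operations are easy to experiment with numerically. The sketch below (in Python with numpy; the language choice and the particular vectors are illustrative assumptions, not anything in the text) adds two vectors coordinate-wise and scales one:

```python
import numpy as np

# Two vectors in the plane, written as columns of expansion coefficients.
A = np.array([3.0, 0.0])
B = np.array([0.0, 4.0])

# Vector addition (the parallelogram law, done coordinate-wise).
C = A + B   # |C> = |A> + |B>

# Multiplying |A> by a constant n scales its length by n
# without changing its direction.
n = 2.0
scaled = n * A
assert np.linalg.norm(scaled) == n * np.linalg.norm(A)
```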
In a real vector space, the (inner or dot) product of a pair of vectors |A> and |B>, written ‘<A|B>’, is a scalar equal to the product of their lengths (or ‘norms’) times the cosine of the angle, θ, between them:
<A|B> = |A| |B| cos θ
Let |A1> and |A2> be vectors of length 1 ("unit vectors") such that <A1|A2> = 0. (So the angle between these two unit vectors must be 90 degrees.) Then we can represent an arbitrary vector |B> in terms of our unit vectors as follows:
|B> = b1|A1> + b2|A2>
For example, here is a graph which shows how |B> can be represented as the sum of the two unit vectors |A1> and |A2>:
Figure 2: Representing |B> by Vector Addition of Unit Vectors
Now the definition of the inner product has to be modified to apply to complex spaces. Let c* be the complex conjugate of c. (When c is a complex number of the form a ± bi, then the complex conjugate c* of c is defined as follows:
[a + bi]* = a − bi
[a − bi]* = a + bi
So, for all complex numbers c, [c*]* = c, but c* = c just in case c is real.) Now the definition of the inner product of |A> and |B> for complex spaces can be given in terms of the conjugates of complex coefficients as follows. Where |A1> and |A2> are the unit vectors described earlier, |A> = a1|A1> + a2|A2> and |B> = b1|A1> + b2|A2>, then
<A|B> = (a1*)(b1) + (a2*)(b2)
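This conjugate-the-first-factor convention is exactly what numpy's `vdot` implements; the following sketch (Python/numpy as an illustrative assumption, with hypothetical coefficient values) checks the formula against a hand computation:

```python
import numpy as np

# Expansion coefficients of |A> and |B> in the unit-vector basis |A1>, |A2>.
a = np.array([1 + 2j, 3 - 1j])
b = np.array([2 - 1j, 0 + 1j])

# <A|B> = (a1*)(b1) + (a2*)(b2): the first vector's coefficients are conjugated.
by_hand = np.conj(a[0]) * b[0] + np.conj(a[1]) * b[1]
inner = np.vdot(a, b)            # vdot conjugates its first argument
assert inner == by_hand

# [c*]* = c for every complex number, and c* = c just in case c is real.
c = 2 + 3j
assert np.conj(np.conj(c)) == c
assert np.conj(2.0) == 2.0
```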
The most general and abstract notion of an inner product, of which we've now defined two special cases, is as follows. <A|B> is an inner product on a vector space V just in case
(i) <A|A> = |A|², and <A|A> = 0 if and only if A = 0,
(ii) <A|B> = <B|A>*, and
(iii) <A|B+C> = <A|B> + <A|C>.
It follows from this that
(i) the length of |A> is the square root of the inner product of |A> with itself, i.e., |A| = √<A|A>, and
(ii) |A> and |B> are mutually perpendicular, or orthogonal, if, and only if, <A|B> = 0.
A vector space is a set of vectors closed under addition and multiplication by constants; an inner product space is a vector space on which the operation of vector multiplication has been defined; and the dimension of such a space is the maximum number of nonzero, mutually orthogonal vectors it contains.
Any collection of N mutually orthogonal vectors of length 1 in an N-dimensional vector space constitutes an orthonormal basis for that space. Let |A1>, ... , |AN> be such a collection of unit vectors. Then every vector in the space can be expressed as a sum of the form:
|B> = b1|A1> + b2|A2> + ... + bN|AN>,
where bi = <Ai|B>. The bi's here are known as B's expansion coefficients in the A-basis.[5]
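The expansion just described can be verified numerically. In the sketch below (Python/numpy, an illustrative choice; the particular basis and vector are hypothetical examples), we compute the coefficients of a vector in a rotated orthonormal basis and reassemble it:

```python
import numpy as np

# A rotated orthonormal basis for a 3-dimensional real space.
A1 = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)
A2 = np.array([1.0, -1.0, 0.0]) / np.sqrt(2)
A3 = np.array([0.0, 0.0, 1.0])
basis = [A1, A2, A3]

B = np.array([2.0, -1.0, 0.5])

# Expansion coefficients bi = <Ai|B>.
coeffs = [np.dot(Ai, B) for Ai in basis]

# Summing bi|Ai> over the basis recovers |B> exactly.
reconstructed = sum(bi * Ai for bi, Ai in zip(coeffs, basis))
assert np.allclose(reconstructed, B)
```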
Notice that:
(i) for all vectors A, B, and C in a given space,
<A|B+C> = <A|B> + <A|C>
and
(ii) for any vectors M and Q, expressed in terms of the A-basis,
|M> + |Q> = (m1 + q1)|A1> + (m2 + q2)|A2> + ... + (mN + qN)|AN>
and
<M|Q> = m1q1 + m2q2 + ... + mNqN
There is another way of writing vectors, namely by writing their expansion coefficients (relative to a given basis) in a column, like so:

|Q> = | q1 |
      | q2 |
      | ...|
      | qN |

where qi = <Ai|Q> and the Ai are the chosen basis vectors.
When we are dealing with vector spaces of infinite dimension, we can't write the whole column of expansion coefficients needed to pick out a vector, since it would have to be infinitely long; so instead we write down the function (called the ‘wave function’ for Q, usually represented ψ(i)) which has those coefficients as values. We write down, that is, the function:
ψ(i) = qi = <Ai|Q>
Given any vector in, and any basis for, a vector space, we can obtain the wave-function of the vector in that basis; and given a wave-function for a vector, in a particular basis, we can construct the vector whose wave-function it is. Since it turns out that most of the important operations on vectors correspond to simple algebraic operations on their wave-functions, this is the usual way to represent state-vectors.
When a pair of physical systems interact, they form a composite system, and, in quantum mechanics as in classical mechanics, there is a rule for constructing the state-space of a composite system from those of its components, a rule that tells us how to obtain, from the state-spaces HA and HB for A and B, respectively, the state-space — called the ‘tensor product’ of HA and HB, and written HA ⊗ HB — of the pair. There are two important things about the rule: first, so long as HA and HB are Hilbert spaces, HA ⊗ HB will be as well, and second, there are some facts about the way HA ⊗ HB relates to HA and HB that have surprising consequences for the relations between the complex system and its parts. In particular, it turns out that the state of a composite system is not uniquely defined by those of its components. What this means, or at least what it appears to mean, is that there are, according to quantum mechanics, facts about composite systems (and not just facts about their spatial configuration) that don't supervene on facts about their components; it means that there are facts about systems as wholes that don't supervene on facts about their parts and the way those parts are arranged in space. The significance of this feature of the theory cannot be overplayed; it is, in one way or another, implicated in most of its most difficult problems.
In a little more detail: if {viA} is an orthonormal basis for HA and {ujB} is an orthonormal basis for HB, then the set of pairs (viA, ujB) is taken to form an orthonormal basis for the tensor product space HA ⊗ HB. The notation viA ⊗ ujB is used for the pair (viA, ujB), and the inner product on HA ⊗ HB is defined as:[6]
<viA ⊗ umB | vjA ⊗ unB> = <viA | vjA> <umB | unB>
It is a result of this construction that although every vector in HA ⊗ HB is a linear sum of vectors expressible in the form vA ⊗ uB, not every vector in the space is itself expressible in that form, and it turns out that
(i) any composite state defines uniquely the states of its components,
(ii) if the states of A and B are pure (i.e., representable by vectors vA and uB, respectively), then the state of (A+B) is pure and represented by vA ⊗ uB,
(iii) if the state of (A+B) is pure and expressible in the form vA ⊗ uB, then the states of A and B are pure, but
(iv) if the states of A and B are not pure, i.e., if they are mixed states (these are defined below), they do not uniquely define the state of (A+B); in particular, it may be a pure state not expressible in the form vA ⊗ uB.
Operators
An operator O is a mapping of a vector space onto itself; it takes any vector |B> in a space onto another vector |B′> also in the space: O|B> = |B′>. Linear operators are operators that have the following properties:
(i) O(|A> + |B>) = O|A> + O|B>, and
(ii) O(c|A>) = c(O|A>).
Just as any vector in an N-dimensional space can be represented by a column of N numbers, relative to a choice of basis for the space, any linear operator on the space can be represented, relative to a choice of basis, by N² numbers arranged in a matrix:

O = | O11  O12 |
    | O21  O22 |

where Oij = <Ai|O|Aj> and the |Ai> are the basis vectors of the space. (The 2 × 2 case is shown; the pattern is the same for any N.) The effect of the linear operator O on the vector |B> is then given by

O|B> = | O11  O12 | | b1 |   =   | O11b1 + O12b2 |
       | O21  O22 | | b2 |       | O21b1 + O22b2 |

      = (O11b1 + O12b2)|A1> + (O21b1 + O22b2)|A2> = |B′>

Two more definitions before we can say what Hilbert spaces are, and then we can turn to quantum mechanics. |B> is an eigenvector of O with eigenvalue a if, and only if, O|B> = a|B>. Different operators can have different eigenvectors, but the eigenvector/operator relation depends only on the operator and vectors in question, and not on the particular basis in which they are expressed; the eigenvector/operator relation is, that is to say, invariant under change of basis. Hermitian operators are linear operators which have only real eigenvalues.

A Hilbert space, finally, is a vector space on which an inner product is defined, and which is complete, i.e., which is such that any Cauchy sequence of vectors in the space converges to a vector in the space. All finite-dimensional inner product spaces are complete, and I will restrict myself to these. The infinite case involves some complications that are not fruitfully entered into at this stage.
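The reality of Hermitian eigenvalues and the eigenvector relation O|B> = a|B> can both be checked numerically. In the sketch below (Python/numpy as an illustrative choice, with a hypothetical 2×2 operator), we use the usual matrix criterion for Hermiticity, equality with one's own conjugate transpose:

```python
import numpy as np

# A hypothetical 2x2 Hermitian operator: equal to its own conjugate transpose.
O = np.array([[2.0, 1 - 1j],
              [1 + 1j, 3.0]])
assert np.allclose(O, O.conj().T)

# eigh is numpy's eigensolver for Hermitian matrices; the eigenvalues it
# returns are real, as the text's definition requires.
eigenvalues, eigenvectors = np.linalg.eigh(O)
assert np.all(np.isreal(eigenvalues))

# O|B> = a|B> for each eigenvalue/eigenvector pair.
for a, B in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(O @ B, a * B)
```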
3. Quantum Mechanics
Four basic principles of quantum mechanics are:

3.1 Physical States
Every physical system is associated with a Hilbert space; every unit vector in the space corresponds to a possible pure state of the system, and every possible pure state, to some vector in the space.[7] In standard texts on quantum mechanics, the vector is represented by a function known as the wave-function, or ψ-function.

3.2 Physical Quantities
Hermitian operators in the Hilbert space associated with a system represent physical quantities, and their eigenvalues represent the possible results of measurements of those quantities.

3.3 Composition
The Hilbert space associated with a complex system is the tensor product of those associated with the simple systems (in the standard, non-relativistic, theory: the individual particles) of which it is composed.

3.4 Dynamics
- Contexts of type 1: Given the state of a system at t and the forces and constraints to which it is subject, there is an equation, ‘Schrödinger's equation’, that gives the state at any other time: U|vt> → |vt′>.[8] The important properties of U for our purposes are that it is deterministic, which is to say that it takes the state of a system at one time into a unique state at any other, and that it is linear, which is to say that if it takes a state |A> onto the state |A′>, and it takes the state |B> onto the state |B′>, then it takes any state of the form α|A> + β|B> onto the state α|A′> + β|B′>.
- Contexts of type 2 ("Measurement Contexts"):[9] Carrying out a "measurement" of an observable B on a system in a state |A> has the effect of collapsing the system into a B-eigenstate corresponding to the eigenvalue observed. This is known as the Collapse Postulate. Which particular B-eigenstate it collapses into is a matter of probability, and the probabilities are given by a rule known as Born's Rule:
prob(bi) = |<bi|A>|²
There are two important points to note about these two kinds of contexts:
- The distinction between contexts of type 1 and 2 remains to be made out in quantum mechanical terms; nobody has managed to say in a completely satisfactory way, in the terms provided by the theory, which contexts are measurement contexts, and
- Even if the distinction is made out, it is an open interpretive question whether there are contexts of type 2; i.e., it is an open interpretive question whether there are any contexts in which systems are governed by a dynamical rule other than Schrödinger's equation.
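Computationally, Born's Rule is just a matter of squaring the magnitudes of expansion coefficients in the eigenbasis of the measured observable. A minimal sketch (Python/numpy; the state and observable are hypothetical examples):

```python
import numpy as np

# Eigenvectors of an observable B (here the standard basis) and a
# normalized state |A> that is not a B-eigenstate.
b1 = np.array([1.0, 0.0])
b2 = np.array([0.0, 1.0])
A = np.array([0.6, 0.8])                 # 0.6^2 + 0.8^2 = 1

# Born's Rule: prob(bi) = |<bi|A>|^2.
probs = [abs(np.vdot(b, A)) ** 2 for b in (b1, b2)]

assert np.allclose(probs, [0.36, 0.64])
assert np.isclose(sum(probs), 1.0)       # probabilities sum to 1
```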
4. Structures on Hilbert Space
I remarked above that in the same way that all the information we have about the relations between locations in a city is embodied in the spatial relations between the points on a map which represent them, all of the information that we have about the internal relations among (and between) states and quantities in quantum mechanics is embodied in the mathematical relations among the vectors and operators which represent them.[10] From a mathematical point of view, what really distinguishes quantum mechanics from its classical predecessors is that states and quantities have a richer structure; they form families with a more interesting network of relations among their members. All of the physically consequential features of the behaviors of quantum mechanical systems are consequences of mathematical properties of those relations, and the most important of them are easily summarized:
(P1) Any way of adding vectors in a Hilbert space or multiplying them by scalars will yield a vector that is also in the space. In the case that the vector is normalized, it will, from (3.1), represent a possible state of the system, and in the event that it is the sum of a pair of eigenvectors of an observable B with distinct eigenvalues, it will not itself be an eigenvector of B, but will be associated, from (3.4b), with a set of probabilities for showing one or another result in B-measurements.

(P2) For any Hermitian operator on a Hilbert space, there are others, on the same space, with which it doesn't share a full set of eigenvectors; indeed, it is easy to show that there are other such operators with which it has no eigenvectors in common.

If we make a couple of additional interpretive assumptions, we can say more. Assume, for instance, that

(4.1) Every Hermitian operator on the Hilbert space associated with a system represents a distinct observable, and (hence) every normalized vector, a distinct state, and

(4.2) A system has a value for observable A if, and only if, the vector representing its state is an eigenstate of the A-operator. The value it has, in such a case, is just the eigenvalue associated with that eigenstate.[11]

It follows from (P2), by (4.1), that no quantum mechanical state is an eigenstate of all observables (and indeed that there are observables which have no eigenstates in common), and so, by (4.2), that no quantum mechanical system ever has simultaneous values for all of the quantities pertaining to it (and indeed that there are pairs of quantities to which no state assigns simultaneous values).
There are Hermitian operators on the tensor product H1 ⊗ H2 of a pair of Hilbert spaces H1 and H2 ... In the event that H1 and H2 are the state spaces of systems S1 and S2, H1 ⊗ H2 is the state-space of the complex system (S1+S2). It follows from this by (4.1) that there are observables pertaining to (S1+S2) whose values are not determined by the values of observables pertaining to the two individually.
These are all straightforward consequences of taking vectors and operators in Hilbert space to represent, respectively, states and observables, and applying Born's Rule (and later (4.1) and (4.2)), to give empirical meaning to state assignments. That much is perfectly well understood; the real difficulty in understanding quantum mechanics lies in coming to grips with their implications — physical, metaphysical, and epistemological.
There is one remaining fact about the mathematical structure of the theory that anyone trying to come to an understanding about what it says about the world has to grapple with. It is not a property of Hilbert spaces, this time, but of the dynamics, the rules that describe the trajectories that systems follow through the space. From a physical point of view, it is far more worrisome than anything that has preceded. For it does much more than present difficulties to someone trying to provide an interpretation of the theory; it seems to point to a logical inconsistency in the theory's foundations.
Suppose that we have a system S and a device S* which measures an observable A on S with values {a1, a2, a3...}. Then there is some state of S* (the ‘ground state’), and some observable B with values {b1, b2, b3...} pertaining to S* (its ‘pointer observable’, so called because it is whatever plays the role of the pointer on a dial on the front of a schematic measuring instrument in registering the result of the experiment), which are such that, if S* is started in its ground state and interacts in an appropriate way with S, and if the value of A immediately before the interaction is a1, then B's value immediately thereafter is b1. If, however, A's value immediately before the interaction is a2, then B's value afterwards is b2; if the value of A immediately before the interaction is a3, then B's value immediately after is b3, and so on. That is just what it means to say that S* measures A. So, if we represent the joint, partial state of S and S* (just the part of it which specifies the value of [A on S & B on S*], the observable whose values correspond to joint assignments of values to the measured observable on S and the pointer observable on S*) by the vector |A=ai>s|B=b i>s*, and let "→" stand in for the dynamical description of the interaction between the two, to say that S* is a measuring instrument for A is to say that the dynamical laws entail that,
|A=a1>s|B=ground state>s* → |A=a1>s|B=b1>s*
|A=a2>s|B=ground state>s* → |A=a2>s|B=b2>s*
|A=a3>s|B=ground state>s* → |A=a3>s|B=b3>s*
and so on.[12]
Intuitively, S* is a measuring instrument for an observable A just in case there is some observable feature of S* (it doesn't matter what, just something whose values can be ascertained by looking at the device) which is correlated with the A-values of systems fed into it in such a way that we can read those values off of S*'s observable state after the interaction. In philosophical parlance, S* is a measuring instrument for A just in case there is some observable feature of S* which tracks or indicates the A-values of systems with which it interacts in an appropriate way.
Now, it follows from (3.1), above, that there are states of S (too many to count) which are not eigenstates of A, and if we consider what Schrödinger's equation tells us about the joint evolution of S and S* when S is started out in one of these, we find that the state of the pair after interaction is a superposition of eigenstates of [A on S & B on S*]. It doesn't matter what observable on S is being measured, and it doesn't matter what particular superposition S starts out in; when it is fed into a measuring instrument for that observable, if the interaction is correctly described by Schrödinger's equation, it follows just from the linearity of the U in that equation, the operator that effects the transformation from the earlier to the later state of the pair, that the joint state of S and the apparatus after the interaction is a superposition of eigenstates of this observable on the joint system.
Suppose, for example, that we start S* in its ground state, and S in the state
1/√2 |A=a1>s + 1/√2 |A=a2>s
It is a consequence of the rules for obtaining the state-space of the composite system that the combined state of the pair is
1/√2 |A=a1>s|B=ground state>s* + 1/√2 |A=a2>s|B=ground state>s*
and it follows from the fact that S* is a measuring instrument for A, and the linearity of U, that their combined state after the interaction is
1/√2 |A=a1>s|B=b1>s* + 1/√2 |A=a2>s|B=b2>s*
This, however, is inconsistent with the dynamical rule for contexts of type 2, for that rule (and if there are any such contexts, this is one) entails that the state of the pair after interaction is either
|A=a1>s|B=b1>s*
or
|A=a2>s|B=b2>s*
Indeed, it entails that there is a precise probability of 1/2 that it will end up in the former, and a probability of 1/2 that it will end up in the latter.
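The linearity argument can be made concrete in a toy model. In the sketch below (Python/numpy, an illustrative assumption), the measurement interaction is modeled by a CNOT-style unitary, a hypothetical choice that satisfies the defining measurement conditions; fed the superposition, linearity yields the entangled sum rather than either definite outcome:

```python
import numpy as np

# A-eigenstates of the measured system S, and pointer states of the device S*.
a1 = np.array([1.0, 0.0])
a2 = np.array([0.0, 1.0])
ground, pointer1, pointer2 = a1, a1, a2   # reuse the same 2-d basis for S*

# A unitary on the joint space satisfying the measurement conditions:
# |a1>|ground> -> |a1>|b1> and |a2>|ground> -> |a2>|b2> (a CNOT matrix).
U = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 1.0],
              [0.0, 0.0, 1.0, 0.0]])

# Check the defining conditions themselves.
assert np.allclose(U @ np.kron(a1, ground), np.kron(a1, pointer1))
assert np.allclose(U @ np.kron(a2, ground), np.kron(a2, pointer2))

# Feed in the superposition 1/sqrt(2)(|a1> + |a2>), device in its ground state.
joint_in = np.kron((a1 + a2) / np.sqrt(2), ground)
joint_out = U @ joint_in

# Linearity forces the entangled sum, not either definite outcome.
expected = (np.kron(a1, pointer1) + np.kron(a2, pointer2)) / np.sqrt(2)
assert np.allclose(joint_out, expected)
```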
We can try to restore logical consistency by giving up the dynamical rule for contexts of type 2 (or, what amounts to the same thing, by denying that there are any such contexts), but then we have the problem of consistency with experience. For it was no mere blunder that that rule was included in the theory; we know what a system looks like when it is in an eigenstate of a given observable, and we know from looking that the measuring apparatus after measurement is in an eigenstate of the pointer observable. And so we know from the outset that if a theory tells us something else about the post-measurement states of measuring apparatuses, whatever that something else is, it is wrong.
That, in a nutshell, is the Measurement Problem in quantum mechanics; any interpretation of the theory, any detailed story about what the world is like according to quantum mechanics, and in particular those bits of the world in which measurements are going on, has to grapple with it.
Loose Ends
Mixed states are weighted sums of pure states, and they can be used to represent the states of ensembles whose components are in different pure states, or states of individual systems about which we have only partial knowledge. In the first case, the weight attached to a given pure state reflects the size of the component of the ensemble which is in that state (and hence the objective probability that an arbitrary member of the ensemble is in it); in the second case, the weights reflect the epistemic probability that the system to which the state is assigned is in that state. If we don't want to lose the distinction between pure and mixed states, we need a way of representing the weighted sum of a set of pure states (equivalently, of the probability functions associated with them) that is different from adding the (suitably weighted) vectors that represent them, and that means that we need either an alternative way of representing mixed states, or a uniform way of representing both pure and mixed states that preserves the distinction between them. There is a kind of operator on Hilbert spaces, called a density operator, that serves well in the latter capacity, and it turns out not to be hard to restate everything that has been said about state vectors in terms of density operators. So, even though it is common to speak as though pure states are represented by vectors, the official rule is that states – pure and mixed alike – are represented in quantum mechanics by density operators.
Although mixed states can, as I said, be used to represent our ignorance of the states of systems that are actually in one or another pure state, and although this has seemed to many to be an adequate way of interpreting mixtures in classical contexts, there are serious obstacles to applying it generally to quantum mechanical mixtures. These are left for detailed discussion in the other entries on quantum mechanics in the Encyclopedia.
Everything that has been said about observables, strictly speaking, applies only to the case in which the values of the observable form a discrete set; the mathematical niceties that are needed to generalize it to the case of continuous observables are complicated, and raise problems of a more technical nature. These, too, are best left for detailed discussion.
This should be all the initial preparation one needs to approach the philosophical discussion of quantum mechanics, but it is only a first step. The more one learns about the relationships among and between vectors and operators in Hilbert space, about how the spaces of simple systems relate to those of complex ones, and about the equation which describes how state-vectors move through the space, the better will be one's appreciation of both the nature and the difficulty of the problems associated with the theory. The funny backwards thing about quantum mechanics, the thing that makes it endlessly absorbing to a philosopher, is that the more one learns, the harder the problems get.