GOOGLE SNIPER

Monday, June 15, 2015

Alpine railroad tunnels Switzerland

Switzerland’s government-owned, 3,100-mile (5,000-kilometer) railroad network is world renowned for its efficiency, despite the difficulties imposed by the mountainous terrain. Two of the four major rail links that pass through the small, landlocked country to connect northern Europe and Italy cross the 13,000-foot-high (4,000-meter) Swiss Alps. That access was made possible only by the remarkable engineering feats embodied in the construction, between 1872 and 1922, of the St. Gotthard, Simplon, and Lötschberg Tunnels, drilled through the rock thousands of feet underground. However, the Swiss were not the first to conquer the mountains.
The earliest European alpine railroad tunnel, the Frejus Tunnel, was drilled through Mont Cenis to connect Modane in the province of Savoy (north of the Alps) with Bardonecchia on the Italian peninsula. King Carlo Alberto of Sardinia championed the scheme in 1845, and his successor Victor Emmanuel II took it up in 1849. Drilling did not begin on the 8-mile (13-kilometer) double-track tunnel—over twice the length of any before attempted—until late 1857, supervised by the engineer Germain Sommeiller (1815–1871), assisted by Sebastiano Grandis and Severino Grattoni. Sommeiller patented the first industrial pneumatic drill, which greatly expedited the work. Finished in 1870, the tunnel was opened in 1871, just two months after his death.
The following year, work began on a 100-mile (160-kilometer) railroad, the Gotthardbahn, which crossed the Lepontine Alps in south-central Switzerland to link Zurich, at the heart of the country’s northern commercial centers, with Chiasso at the Italian frontier. Before then the way across the Alps, used for 800 years, was over the 6,935-foot (2,114-meter) St. Gotthard Pass, over which a road was built in the 1820s. Alfred Escher, the founder of Credit Suisse, was the initiator of the Gotthardbahn, and as its president, together with Emil Welti, he negotiated German and Italian cooperation for the project in 1869–1871. Two feeder lines meet at Arth-Goldau; from there the mountain section runs through Brunnen, Flüelen, and Altdorf to Erstfeld. There it commences the steep climb, looping through spiral tunnels, to Göschenen at the northern end of the St. Gotthard Tunnel. Designed by the Geneva engineer Louis Favre, the double-track tunnel is 9.25 miles (15 kilometers) long, passing through the mountain 5,500 feet (1,700 meters) below the surface. The southern ramp is even steeper, and at Giornico more loops take the line down to Chiasso. The tunnel was drilled from both ends, and the bores joined in 1880. The railroad was opened in 1882, when the difficult approach lines were completed. Favre had accepted punishingly tight schedules for the contract. He drove his force of 4,000 immigrant laborers to cut almost 18 feet (5.4 meters) a day—over twice the rate achieved in the Frejus Tunnel—in horrifying working conditions: water inrushes, rock falls, dust, and (because of the great depth) temperatures up to 102°F (39°C). About 1,000 men suffered serious injury; 310 were killed.
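Those two rates can be roughly cross-checked from the figures quoted above. A back-of-envelope sketch in Python, assuming (hypothetically) continuous drilling at the Frejus between late 1857 and 1870:

```python
# Rough comparison of tunneling rates, using only figures quoted above.
# Assumes continuous working between the dates given; the actual working
# calendar would shift these numbers somewhat.

frejus_length_m = 13_000              # the 8-mile (13 km) Frejus bore
frejus_days = (1870 - 1857.75) * 365  # drilling ran from late 1857 to 1870
frejus_rate = frejus_length_m / frejus_days

gotthard_rate = 5.4                   # meters per day, as quoted for Favre's crews

print(f"Frejus advance:   {frejus_rate:.1f} m/day")   # ~2.9
print(f"Gotthard advance: {gotthard_rate:.1f} m/day")
print(f"ratio: {gotthard_rate / frejus_rate:.1f}x")   # close to the 'twice' quoted
```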
Twenty years later, the safety record on the Simplon Tunnel, although far from perfect, was much better. From the thirteenth century, the 6,590-foot (2,009-meter) Simplon Pass near the Swiss-Italian border was a key to trade between northern and southern Europe; and at the beginning of the nineteenth century, probably for military reasons, Napoléon I ordered a road built over it. Begun around 1898, the Simplon Railroad connects the Swiss town of Brig with Iselle, Italy. Its 12.3-mile (19.8-kilometer) tunnel—in reality two tunnels—under Monte Leone was conceived as a twin-tube single-track system by the German engineer Alfred Brandt; separate galleries 55 feet (17 meters) apart were linked with crosscuts. Until the completion of Japan’s Seikan Tunnel in 1988, the Simplon Tunnel was the world’s longest railroad tunnel. Because of its depth—up to 7,000 feet (2,140 meters) below ground—temperatures exceeding 120°F (49°C) were faced during construction. The first gallery, Simplon I, was completed by January 1905 and traffic commenced the following year. Various problems, including the intervention of World War I, delayed the second gallery, Simplon II, which was not opened until 1922.

Friday, June 12, 2015

BART (Bay Area Rapid Transit) San Francisco, California

BART (Bay Area Rapid Transit) is a 95-mile (152-kilometer) automated rapid-transit system, the first of the “new generation” of such systems in the United States. By the end of the twentieth century there were thirteen in operation, including Washington, D.C. (opened 1976), Atlanta (1979), and Miami (1986). BART has thirty-nine stations on five lines radiating out from San Francisco to serve Contra Costa and Alameda Counties in the eastern Bay Area of northern California.
In 1947 a joint Army-Navy review board predicted that another connecting link between San Francisco and Oakland would be needed to prevent intolerable traffic congestion on the Bay Bridge. It proposed the construction of a tube to carry high-speed electric trains under the waters of the bay. Four years later the California State Legislature created the San Francisco Bay Area Rapid Transit Commission and charged it with finding a long-term transportation solution in the context of environmental problems, not least among them the danger from earthquakes. After six years of investigation, the commission concluded that any transportation plan would have to be part of a total regional development plan. Because no such plan existed, the commission prepared a coordinated master strategy, later adopted by the Association of Bay Area Governments.
The commission’s most economical transportation solution was to establish a five-county rapid-transit
district, with the task of building and operating a high-speed rapid rail network linking major commercial centers with suburban nodes. The San Francisco BART District was formed, comprising the counties of Alameda, Contra Costa, Marin, San Francisco, and San Mateo. Plans were made for a revolutionary rapid-transit system. Electric trains would run on grade-separated corridors at maximum speeds of 80 mph (128 kph), averaging around 45 mph (72 kph). Sophisticated, well-appointed vehicles would compete with private automobiles in the Bay Area, and well-designed, conveniently located stations would be built.
By mid-1961, after extensive public consultation, the final plan was submitted to the five counties for approval. San Mateo County was unconvinced and withdrew from the scheme in December. Marin County also withdrew a few months later, not only because it could not sustain its share of the cost but also because there were questions about the feasibility of running trains across the Golden Gate Bridge. The original proposal was therefore revised as a three-county plan, providing links across the bay between San Francisco and Contra Costa and Alameda. Those counties accepted the BART Composite Report in July 1962.
As part of the following November’s general election, voters approved a $792 million bond issue to finance the high-speed transit system and to rebuild 3.5 miles (5.6 kilometers) of the San Francisco Municipal Railway. The estimated $133 million cost of the Transbay Tube was to be funded by Bay Bridge tolls. The rolling stock, which would run on 1,000-volt direct current, was estimated to cost another $71 million, and the total cost of the system was projected at $996 million—the largest public works project ever undertaken by local residents in the United States. There were to be many delays, and costs would inevitably rise, eventually totaling $1.62 billion.
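The component figures do reconcile with the quoted total; a quick tally (in millions of dollars, as given above):

```python
# Tallying the projected BART costs quoted above (US$ millions).
budget = {
    "bond issue (system plus Muni rebuild)": 792,
    "Transbay Tube (funded by Bay Bridge tolls)": 133,
    "rolling stock": 71,
}
total = sum(budget.values())
print(f"projected total: ${total}M")   # 996, matching the $996 million figure

final_cost = 1_620                     # the eventual $1.62 billion
print(f"overrun: ${final_cost - total}M ({(final_cost - total) / total:.0%})")  # ~63%
```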
Parsons-Brinckerhoff-Tudor-Bechtel was the consortium appointed to manage the project, consisting of Parsons-Brinckerhoff-Quade and Douglas (the New York originators of the first plan); and from San Francisco, Tudor Engineering Company and the Bechtel Corporation. BART construction began on 19 June 1964, on the Diablo Test Track in Contra Costa County; completed ten months later, it was used to develop and test the vehicular system.
The Oakland subway was commenced in January 1966. In the following November the first of the fifty-seven, 24-foot-high-by-48-foot-wide (7.4-by-14.8-meter) steel-and-concrete sections of the Transbay Tube, almost 4 miles (6.4 kilometers) long in total, was submerged in the bay. A 3-mile-long (4.8-kilometer) drilled rock tunnel through the Berkeley Hills was completed four months later. The Transbay Tube structure was completed in August 1969. Lying as much as 135 feet (41.3 meters) underwater, it took six years to design (seismic studies were an integral part of the process), and under three to build. The tunnel (indeed the entire BART system) would survive intact the Loma Prieta earthquake of 1989. The final cost of the tunnel was $180 million. Before the tube was closed to visitors so the rail tracks could be installed, thousands of pedestrians passed through.
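The text gives only the section count and the overall length, but an average section length follows from them; a hedged estimate (an average, not a recorded dimension):

```python
# Average Transbay Tube section length implied by the figures above:
# 57 steel-and-concrete sections totaling almost 4 miles (6.4 km).
tube_length_m = 6_400
sections = 57
avg_m = tube_length_m / sections
print(f"average section: {avg_m:.0f} m ({avg_m * 3.281:.0f} ft)")  # ~112 m (~368 ft)
```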
In July 1967 construction began on the two-level Market Street subway, 100 feet (30.6 meters) below San Francisco. The work was complicated by a difficult mud-and-water environment and the century-old network of underground utilities. The first tunneling on the west coast was carried out entirely under compressed-air conditions; this section of the project brought the BART workforce to 5,000 in 1969. On 27 January 1971 the bore into the west end of Montgomery Street Station marked the completion of that phase of the project.
Although delays and inflation were eroding capital, public and governmental pressure groups forced the relocation of 15 miles (24 kilometers) of line and 15 stations, and a general improvement of station designs. The stations were also substantially altered during construction to improve access. Discussion of BART’s financial problems is not the purpose of this essay: suffice it to say that an increasing input of federal money was needed to support the constant variations and improvements to the original plan. BART’s linear park was constructed to demonstrate how functionality need not spoil the amenity of the environment, and major landscaping was partly funded by federal money.
When the first 250 vehicles were eventually ordered from Rohr Industries of California, the price
had reached $80 million—$18 million above the estimate for the whole 450-car fleet. The first car was delivered in August 1970, and within months 10 test cars were operating on the Fremont Line. Paid service began on 11 September 1972 on the 28 miles (45 kilometers) between Fremont and MacArthur Stations. Heavily subsidized by federal grants, 200 more cars had been bought by July 1975. In the late 1980s, BART purchased another 150 from SOFERVAL, an American subsidiary of Alsthom Atlantique of France, and 80 more from Morrison-Knudsen a few years later.
A central control room, installed in 1972 in the Lake Merritt Administration Building, was replaced in 1979 by an Operations Control Center, from which train operations and remote control of electrification, ventilation, and emergency-response systems are supervised.
In 1991, the BART Extensions Program launched a $2.6 billion plan to expand services in Alameda, Contra Costa, and San Mateo Counties. Since then 5 stations and 21 miles (33 kilometers) of double track have been added, including the Pittsburg-Antioch Extension, whose North Concord/Martinez Station opened in December 1995, the first new one in over 20 years. The $517 million Dublin/Pleasanton Extension opened in May 1997. A proposal to connect BART to San Francisco International Airport (SFO) was first considered in 1972, just as the inaugural service was opened. The first stage opened in February 1996. During the next phase, BART will move further down the San Francisco peninsula, adding 9 miles (14.4 kilometers) of track and 4 new stations, including one inside the new International Terminal. Work on the final leg started in 1997, and the line was scheduled for completion early in the twenty-first century. In 1995, BART launched a ten-year program, costing $1.1 billion, to overhaul the system infrastructure and the original fleet of cars.
Further reading
Anderson, Robert M., et al. 1980. Divided Loyalties: Whistle-Blowing at BART. West Lafayette, IN: Purdue University Press.
Grant, Howard. 1976. Notes from Underground: An Architect’s View of BART. San Francisco: Reid and Tarics Associates.
Grefe, Richard, and Richard Smart. 1976. A History of the Key Decisions in the Development of Bay Area Rapid Transit. San Francisco: McDonald and Smart.

Banaue rice terraces Ifugao Province, Philippines

In the Banaue municipality of the northern Ifugao Province on the Philippine island of Luzon, the indigenous Igorot people have constructed 49,500 acres (20,000 hectares) of agricultural land upon the inhospitable
bedrock of the steep Cordillera Central Mountain Range. For millennia, succeeding generations of farmers built and maintained 12,500 miles (20,000 kilometers) of dikes and retaining walls—enough to stretch halfway around the equator—creating a unique, irregular patchwork of terraced rice paddies. The American anthropologist Roy Barton called these terraces and others in the region “a modification by man of the earth’s surface on a scale unparalleled elsewhere.”
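The equator comparison holds up; a minimal check against the standard equatorial circumference of about 40,075 kilometers:

```python
# Checking the claim above: 20,000 km of terrace walls versus
# half of the Earth's equatorial circumference.
wall_length_km = 20_000
equator_km = 40_075
print(f"fraction of equator: {wall_length_km / equator_km:.2f}")  # ~0.50
```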
The Cordillera rice terraces were added to UNESCO’s World Heritage List in December 1995, a decision justified in the following terms: “The fruit of knowledge passed on from one generation to the next, of sacred traditions and a delicate social balance, they helped form a landscape of great beauty that expresses conquered and conserved harmony between humankind and the environment.” Moreover, they were cited as “outstanding examples of living cultural landscapes.”
The tiers rise to about 4,900 feet (1,500 meters) above sea level. Each is defined by a stone or clay retaining wall, snaking along the contours of the steep mountainside. Stone walls are up to 50 feet (15 meters) high; some of the clay walls are more than 80 feet (25.5 meters) high. Some garden terraces have been backfilled with soil, ash, and composted vegetable material, while others have been simply carved from the rock and overlaid with soil washed down from the higher levels. Rice cannot be grown without large quantities of water, and the terraces are served by an elaborate irrigation system, comprising canals cut through the rock and bamboo and wooden aqueducts. Once the highest terraces are flooded, water spills over the descending walls until the whole hillside is irrigated.
What of their builders? Igorot (literally, “the mountain people”) is a broad ethnic classification applied to a number of groups bound by common sociocultural and religious characteristics—Ibaloy, Kankanay, Ifugao, Kalinga, Apayao, and Bontoc—who occupy the Cordilleras. They originate from the warlike immigrants who, some scholars believe, reached the northern islands of the Philippines from Vietnam and China 10,000 years ago. Their descendants eventually became rice farmers and, against the difficulties presented by the hostile topography, built their amazing tiers of rice fields on the precipitous mountainsides. The true age of the terraces remains in question: some sources suggest that the Igorot commenced them between 200 b.c. and a.d. 100, others that they date from at least 1000 b.c. As late as the 1990s rising nationalism had not permeated their tribal highlands, and the Igorots, while regarded as citizens, did not think of themselves as Filipinos. They were further alienated by the Marcos administration’s dam-building schemes, which included flooding the mountain valleys in their Cordillera homelands. They continue to resist integration into Filipino society.
The rice culture of the Igorot, central to their way of life, inevitably had a spiritual dimension. As Joaquin Palencia remarks, “the adversarial nature of the geography of this region and the tremendous
odds faced by the Ifugao to assure access to food … set the stage for the bul-ul, the rice god figures that came to be a mechanism through which superhuman restraint became central to the production of a basic need.” Indeed, the Igorot embrace no fewer than 1,500 gods, each type fulfilling a different function. The bul-ul is a large-headed, seated or standing humanoid figure, ritually carved, usually from sacred narra wood. The sizes of the rice field and its guardian bul-ul are directly related: the Banaue terraces have large, thickset bul-ul. Once the ceremonies and feasts are completed, the figure is installed in a granary in the attic of a house, from which it is believed to protect crops and ensure abundant harvests. But the forces threatening the Banaue rice terraces, and others like them at Hungduan, Kiangan, Mayoyao, and Bontok, are other than spiritual.
Rice farming is labor intensive—and hard labor at that—and yields low financial returns. The main threat to the terraces is the departure of young Ifugaos, who seek better work opportunities in the cities. Water shortage is also a problem: the lack of rain in the dry season is exacerbated by systematic deforestation and illegal logging. Because of such poor forestry management there is no longer enough water for irrigation, and recent harvests have been unable to sustain even the terrace owners, much less provide a cash crop. Moreover, with only one crop a year, mountainside farming compares poorly with the lowland paddies, where there are two. Many Ifugao farmers, encouraged by the Rice Terraces Commission (RTC), established under President Fidel Ramos in 1994, are now planting vegetables that can be harvested after six weeks, a quarter of the time needed for rice.
In 1998, when these combined problems were exacerbated by accelerated erosion caused by introduced giant earthworms, the RTC introduced a plan to maintain the terraces, focused on the preservation of Ifugao culture, diversification of the regional economic base, and the application of appropriate current agricultural technology. Its success has yet to be proven. Promoted by the Philippines tourism authority as “the eighth wonder of the world,” the Banaue rice terraces are among the country’s major attractions. Already under threat from cultural change, neglect, and inadequate irrigation, they will be in ruins within a couple of decades if they are not maintained.
Further reading
Barton, Roy Franklin. 1922. Ifugao Economics. Berkeley: University of California Press.
Palencia, Joaquin C. 1998. The Ifugao Bul-ul. Tribal Arts Online. May. http://www.tribalarts.com.
de los Reyes, Angelo J., and Aloma M. de los Reyes, eds. 1987. Igorot: A People Who Daily Touch the Earth and the Sky. Baguio, Philippines: CSC.
Scott, William Henry. 1966. On the Cordillera: A Look at the Peoples and Cultures of the Mountain Province. Manila: MCS Enterprises.

Babylon: Nebuchadnezzar’s city Iraq

The city of Babylon (“Gate of God”) once stood on the banks of the Euphrates River, 56 miles (90 kilometers) south of Baghdad, Iraq. It was the capital of Babylonia in the second and first millennia b.c. In a.d. 1897 the German archeologist Robert Koldewey commenced a major excavation. During the next twenty years he unearthed, among many other structures, a processional avenue to the temple of Marduk and the legendary fortified city wall, which once enjoyed a place among the seven wonders of the ancient world. It was not until the sixth century a.d. that its place was usurped by the so-called Hanging Gardens.
Babylon entered the pages of history as the site of a temple around 2200 b.c. At first it was subject to Ur, an adjacent city-state, but gained its independence in 1894 b.c., when Sumu-abum established the dynasty that reached its zenith under Hammurabi, known as “the Lawgiver.” The Hittites overran the city 330 years later. It was then governed by the Kassite dynasty, which extended its borders and made it the capital of the country of Babylonia, with southern Mesopotamia under its control. When the Kassites yielded to pressure from the Elamites in 1155 b.c., Babylon was governed by a succession of ephemeral dynasties and became part of the Assyrian Empire in the late eighth century b.c. In turn, the Assyrians were driven out by Nabopolassar, who founded the Neo-Babylonian dynasty around 615 b.c. His son Nebuchadnezzar II (ca. 604–561 b.c.) built the kingdom into an empire that covered most of southwest Asia.
Babylon, now Nebuchadnezzar’s imperial capital, underwent a huge rebuilding program—new temples and palace buildings, defensive walls and gates, and a splendid processional way—to make it the largest city in the known world, covering some 2,500 acres (1,000 hectares). It must have impressed visitors, because the myth sprang up, perhaps from the assertion of the Greek historian Herodotus, that it was 200 square miles (510 square kilometers) in area, with 330-foot-high (99-meter) walls, 80 feet (25 meters) thick. Of his achievement, Nebuchadnezzar boasted, “Is not this great Babylon that I have built for the house of my kingdom by the might of my power, and for the honor of my majesty?”
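Setting Herodotus’s figure against the excavated extent shows the scale of the myth; a small sketch converting the 2,500 acres quoted above (at 640 acres to the square mile):

```python
# How far Herodotus's 200-square-mile Babylon overshoots the
# roughly 2,500-acre city described above.
actual_acres = 2_500
actual_sq_mi = actual_acres / 640               # 640 acres per square mile
herodotus_sq_mi = 200
print(f"excavated extent: {actual_sq_mi:.1f} sq mi")           # ~3.9
print(f"exaggeration: {herodotus_sq_mi / actual_sq_mi:.0f}x")  # ~51x
```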
The Euphrates River divided the city into two unequal sectors. The “old” quarter, including most of the palaces and temples, stood on the east bank; Nebuchadnezzar’s new city was on the west. The whole was surrounded by an 11-mile-long (17-kilometer) outer wall enclosing suburbs and the king’s summer palace. The inner wall, penetrated by eight fortified gates leading to the outlying regions of Babylonia, was wide enough to allow two chariots to be driven abreast on its top. Most prominent among
the portals was the northern Ishtar Gate, dedicated to the queen of heaven: a defensible turreted building with double towers and a barbican, faced with blue glazed brick and richly ornamented with 500 bulls, dragons, and other animals in colored brick relief.
Through the Ishtar Gate passed the north-south processional way, which ran past the royal palace and was used in the New Year festival. It was paved with limestone slabs, about 3.5 feet (1 meter) square; the flanking footpaths were of breccia stones about 2 feet (600 millimeters) square. Joints were beveled and the gaps filled with asphalt. The road was contained by 27-foot-thick (8-meter) turreted walls, behind which citadels were strategically placed. The faces of the walls were decorated with lions in low relief. Much of the significance of the road lies in the exotic and doubtless expensive materials employed. The land between the rivers had little naturally occurring stone, and except for their faces, the city walls and gatehouses and even the king’s palace were constructed of sun-dried brick.
Inside the Ishtar Gate, at the northwest corner of the old city, stood Nebuchadnezzar’s extensive palace with its huge throne room, and the fabled Hanging Gardens of Babylon (more likely “overhanging” gardens). Described by one first-century b.c. visitor as “vaulted terraces raised one above another,” they were irrigated with water pumped from the Euphrates. Another early description says that this 400-foot-square (122-meter) artificial mountain was more than 80 feet (25 meters) high and built of stone. It was planted with all manner of vegetation, including large trees. There is a romantic legend that the Hanging Gardens were built for Nebuchadnezzar’s wife, Amytis, a Mede who missed the green mountains of her motherland. Beside the palace stood the rebuilt temple of the city’s patron god, Marduk, replete with gold ornament. In a sacred precinct north of the temple stood a seven-story ziggurat (stepped pyramid); some descriptions put its height at 300 feet (90 meters).
Nebuchadnezzar was Babylon’s last great ruler. Because his successors were comparatively weak, the Neo-Babylonian Empire quickly passed. In 539 b.c. the Persian Cyrus II took the city by stealth, overthrew Nebuchadnezzar’s grandson Belshazzar, and subsumed Babylon into his empire. The city became the official residence of the crown prince, but following a revolt in 482 b.c., Xerxes I demolished the temples and ziggurat, thoroughly destroying the statue of Marduk. Alexander the Great captured the city in 330 b.c. but he died before he could carry out his intention to refurbish it as the capital of his empire. For a few years after 312 b.c., the Seleucid dynasty used Babylon as a capital until the seat of government was moved (with most of the population) to the new city of Seleucia on the Tigris. Babylon the Great became insignificant, and by the foundation of Islam in the seventh century a.d., it had almost disappeared.
Now Babylon is being rebuilt. In April 1989 the New York Times International reported that, under Iraqi President Saddam Hussein, “walls of yellow brick, 40 feet [12 meters] high and topped with pointed crenellations, have replaced the mounds that once marked [Nebuchadnezzar’s] Palace foundations. And as Babylon’s walls rise again, the builders insert inscribed bricks recording how [it] was ‘rebuilt in the era of the leader Saddam Hussein.’” An annual International Babylon Festival—one was subtitled “From Nebuchadnezzar to Saddam Hussein”—is part of the megalomaniac dictator’s projection of himself as the ancient king’s successor. Portraits of the two hang side by side on a restored wall in Babylon.
See also
Hanging Gardens of Babylon
Further reading
Reade, Julian. 1991. Mesopotamia. Cambridge, MA: Harvard University Press.
Saggs, Henry William Frederick. 1988. The Greatness That Was Babylon. London: Sidgwick and Jackson.
Service, Pamela F. 1998. Ancient Mesopotamia. New York: Benchmark Books.

Avebury Stone Circle England

The Avebury Stone Circle, covering around 28 acres (11 hectares), is the largest known stone circle in the world. It partly embraces the linear village of Avebury, 90 miles (145 kilometers) west of London in a part of England that is replete with prehistoric remains: Silbury Hill; the Sanctuary; and the long barrows of East Kennet, West Kennet, and Beckhampton. John Aubrey, who accidentally discovered it while foxhunting in the winter of 1648, wrote that Avebury “does as much exceed in greatness the so renowned Stonehenge as a Cathedral doeth a parish Church.” Indeed, it is sixteen times the size of Stonehenge.
When the Avebury circle was intact, its complex, if rather irregular, geometry comprised a 30-foot-deep (9.2-meter) ditch inside a 20-foot-high (6-meter) grass-covered chalk bank 1,396 feet (427 meters) in diameter. One observer describes it as “a curiously amorphous ‘D’ shape.” The ditch, possibly once filled with water, enclosed an outer circle of about 100 enormous, irregular standing stones that varied in height from 9 to 20 feet (2.7 to 6 meters). Within the large circle, there were two inner circles, each about 340 feet (104 meters) in diameter. The northern one (now largely destroyed) seems to have comprised two concentric rings, one of twenty-seven stones and one of twelve; at their center stood three larger stones. The southern circle had a single 20-foot-high (6-meter) stone at its center. The inmost circles are thought to have been set up about 2600 b.c.; the outer ring and enclosing earthworks have been dated at a century later.
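The quoted 28 acres is consistent with the 1,396-foot diameter: a full circle of that diameter gives an upper bound, and the flattened “D” shape accounts for the shortfall. A quick sketch:

```python
import math

# Upper bound on the Avebury enclosure: the area of a full circle
# with the 1,396-foot bank diameter quoted above.
diameter_ft = 1_396
area_acres = math.pi * (diameter_ft / 2) ** 2 / 43_560  # 43,560 sq ft per acre
print(f"full-circle area: {area_acres:.0f} acres")  # ~35; the flattened 'D'
                                                    # encloses the ~28 quoted
```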
Its construction called for colossal effort on the part of the builders. The standing stones were quarried and dressed 2 miles (3.2 kilometers) from their final position, dragged or perhaps sledded to the site—some weighed 45 tons (41 tonnes)—and set upright. The excavation of the vast surrounding ditch with rudimentary stone tools yielded an estimated
200,000 tons (203,200 tonnes) of spoil, mostly chalk stone. Some of the spare material may have been carried 1 mile (1.6 kilometers) to construct the mysterious 130-foot-high (39.6-meter) chalk mound known as Silbury Hill, just outside the village of Avebury. Many of the stones are now missing, possibly “quarried” by farmers or cleared for agricultural and even religious reasons since about the fourteenth century a.d., when villagers actually buried some of them. Only 36 of the original 154 megaliths remain standing.
The outer circle was broken to form four 50-foot-wide (15.3-meter) entrances, facing approximately north, south, east, and west. Two were the terminations of avenues of the same width, defined by standing stones and extending up to 1.5 miles (2.5 kilometers) across the surrounding countryside. According to the eighteenth-century antiquarian William Stukeley, the so-called West Kennet Avenue ran south to the Sanctuary, another stone circle on Overton Hill; the one named Beckhampton Avenue ran west to end at the neolithic tomb known as Beckhampton Long Barrow. Stukeley’s measured drawings, made before 1743, are the only surviving record of the former condition of the site. He interpreted the ground plan of Avebury as the body of a serpent passing through a circle—a traditional alchemical symbol—and whose head and tail were marked by the avenues.
Recent investigations have led some scholars to speculate that the Avebury circle was part of a network of sacred places that stretched 200 miles (320 kilometers) across southern England. Similar to Stonehenge and many other megalithic monuments in Britain, the Avebury Stone Circle formed part of at least a local complex of megalithic works. The whole complex probably continued to be used for around 2,300 years. That persistence and the very size of the Avebury Stone Circle give weight to the suggestion that it was “perhaps the most significant sacred site in all of Britain, if not the entire continent of Europe.” The renaissance of paganism in the West at the end of the twentieth century excited new interest in its elusive mysteries.
Further reading
Burl, Aubrey, and Edward Piper. 1980. Rings of Stone: The Prehistoric Stone Circles of Britain and Ireland. New Haven, CT: Ticknor and Fields.
Grundy, Alan H. 1994. Britain’s Prehistoric Achievements. Lewes, UK: Book Guild.
Hadingham, Evan. 1975. Circles and Standing Stones. London: Heinemann.

Aswan High Dam Egypt

The Aswan High Dam, replacing earlier dams, contains the River Nile nearly 600 miles (1,000 kilometers) upstream from Cairo by a massive embankment 375 feet (114 meters) high and 3,280 feet (1,003 meters) long, built of earth and rock fill with a clay and concrete core. It impounds Lake Nasser, one of the largest reservoirs in the world, covering an area more than 300 miles (480 kilometers) long and 10 miles (16 kilometers) wide, that holds enough water to irrigate over 7 million acres (2.8 million hectares) of farmland for many years. Its economic and social impact on the lower reaches of the Nile (that is, in the north of Egypt) makes it an engineering feat of some importance, although not necessarily always beneficial.
The annual flooding of the Nile has been the historical life source of Egypt, in what is almost a rainless region. Almost all the population lives within 12 miles (20 kilometers) of the river. The flooding—16 billion to 203 billion cubic yards (12 billion to 155 billion cubic meters)—is caused by late-summer rains on Ethiopia’s plateaus that find their way into the Nile’s tributaries. Late in the nineteenth century, regional population growth was outstripping agricultural production, and the river had to be controlled to recover stability. The first Aswan Dam was built from 1899 to 1902 and raised in 1907–1912 and again in 1929–1934. When its potential to generate
the power needed in industrializing economies was realized, hydroelectric installations were added in 1960. But its inadequate storage capacity meant that in years of extreme flooding, sluices would have to be opened to protect the structure, thereby, ironically, inundating the very areas the dam was meant to protect. Designs for the High Dam, 4 miles (6.4 kilometers) south of the existing structure, were put in hand in 1952. Egypt and Sudan signed the Nile Water Agreement in November 1959.
President Gamal Abdel Nasser initiated the Aswan High Dam project. Because of his connections with the Communist bloc, the United States and Britain refused to provide loans, so the Soviet Union provided civil engineers to design the earthen dam and supplied the equipment and 400 technicians and electrical engineers to build the hydroelectric power station. Moreover, almost a third of the estimated U.S.$1 billion cost was met by the Soviet Union, and the remainder was funded by Egypt’s controversial nationalization of the Suez Canal. Construction, commenced in 1960, was complete by mid-1968. The last of the twelve turbines was installed in 1970, and in 1971 President Anwar Sadat officially inaugurated the High Dam.
Despite causing some ecological problems, the dam brought good outcomes for the people of the Nile valley. For example, with a regulated agricultural system in place, through multiple cropping the nation’s agricultural income has increased by 200 percent. For all that, the poor drainage of the newly irrigated lands has led to increased salinity, and more than half of Egypt’s arable soils are now medium to poor in quality. Village communities were provided with water and electricity. A fishing industry was established, with an annual production target of 112,000 tons (101,600 tonnes) by the year 2000. The hydroelectric scheme generates 2,100 megawatts, about half of Egypt’s annual needs at the time of construction; demand has since increased, and the Aswan High Dam now provides a little over 20 percent. Egypt was untouched by the drought over much of Africa in the late 1980s, and during the following decade the land was saved from several unusually high floods. In 1996 Lake Nasser rose above the spill level for the first time, and plans are in hand to open up more irrigated farmland, even recovering parts of the Sahara Desert.
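The shift from “about half” of national demand to “a little over 20 percent” implies how much Egyptian consumption grew; a rough sketch under those stated shares:

```python
# Implied growth in Egypt's electricity demand, from the shares above.
dam_output_mw = 2_100
demand_then_mw = dam_output_mw / 0.50    # dam met ~half of demand at construction
demand_later_mw = dam_output_mw / 0.20   # later, a little over 20 percent
print(f"demand at construction: ~{demand_then_mw:,.0f} MW")   # ~4,200
print(f"demand later:           ~{demand_later_mw:,.0f} MW")  # ~10,500
print(f"implied growth: ~{demand_later_mw / demand_then_mw:.1f}x")  # ~2.5x
```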
Nevertheless, the construction of the Aswan High Dam has caused some problems, mostly as cumulative effects of impeding the flow of the river. Farmers have had to turn to artificial fertilizers to replace the nutrients that no longer reach the floodplain. Before the High Dam was built, half the water flowing in the Nile reached the Mediterranean; the millions of tons of silt it once carried to the sea are now mostly trapped behind the dam; consequently the ocean is eroding the coastline. The now-stagnant waters in the Delta destroyed long-standing ecological systems, and the loss of nutrients from the river has drastically damaged the sardine industry in the Mediterranean: 20,000 tons (18,288 tonnes) were caught in 1962; that figure was reduced to only 670 tons (609 tonnes) in 1969. Other branches of the Mediterranean fishing industry were similarly affected. Thankfully, for whatever reasons, there has since been a complete recovery.
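In percentage terms the sardine figures above amount to a near-total collapse:

```python
# The Mediterranean sardine catch before and after the dam, per the text.
catch_1962_tons = 20_000
catch_1969_tons = 670
decline = 1 - catch_1969_tons / catch_1962_tons
print(f"decline in catch: {decline:.1%}")   # ~96.7% lost
```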
The Aswan High Dam also had social and cultural implications. Not least traumatic of these was the displacement of population. More than 90,000 Nubians had to be relocated; those living in Egypt were moved about 28 miles (45 kilometers), but Sudanese Nubians had to move to new homes 370 miles (600 kilometers) away. Much of Lower Nubia was submerged under Lake Nasser, including archeological sites between the First and Third Cataracts of the Nile. At the urging of UNESCO, a rescue program named the Nubia Salvage Project was started in 1960 by the Oriental Institute, University of Chicago. As a result, twenty monuments from the Egyptian part of Nubia (including the front of the rock-hewn temple at Abu Simbel) and four others from the Sudan were dismantled, relocated, and reerected. Others were documented before their inundation, but some were lost forever without being recorded.
Further reading
Fahim, Hussein M. 1981. Dams, People, and Development: The Aswan High Dam Case. New York: Pergamon.
Little, Tom. 1965. High Dam at Aswan: The Subjugation of the Nile. New York: John Day.
Shibl, Yusuf A. 1971. The Aswan High Dam. Beirut: Arab Institute for Research and Publishing.

Thursday, June 11, 2015

Artemiseion Ephesus, Turkey

The Artemiseion, a huge Ionic temple dedicated to the goddess Artemis, stood in the city of Ephesus on the Aegean coast of what was then Asia, near the modern town of Selcuk, about 30 miles (50 kilometers) south of Izmir, Turkey. The splendid building was acclaimed as one of the seven wonders of the world, as attested by Antipater of Sidon: “When I saw the sacred house of Artemis … the [other wonders] were placed in the shade, for the Sun himself has never looked upon its equal outside Olympus.” Among several attempts to identify the architectural and sculptural wonders of the ancient world, the seven best known are those listed by Antipater in the second century b.c. and confirmed soon after by one Philo of Byzantium.

Artemis was the Greek moon goddess, daughter of Zeus and Leto. Whatever form she was given, it was always linked with wild nature. On the Greek mainland she was usually portrayed as a beautiful young virgin, a goddess in human form. In Ephesus and the other Ionic colonies of Asia, where ancient ideas of the Earth Mother and associated fertility cults persisted, she was linked with Cybele, the mother goddess of Anatolia, and her appearance was dramatically different, even grotesque. The original cult statue has long since disappeared, but copies survive. That is hardly surprising, because the trade in them flourished in Ephesus at least until the first century a.d. They portray a standing figure, her arms outstretched like those of the earlier décolleté figurines common in Minoan Crete. Artemis was fully dressed except for her many breasts, symbolizing her fertility (although some recent scholars have suggested that the bulbous forms are bulls’ scrotums). The lower part of her body was covered with a tight-fitting skirt, decorated with plant motifs and carved in relief with griffins and sphinxes. She wore a head scarf decorated
in the same way and held in place with a four-tiered cylindrical crown. Ancient sources say that the original statue was made of black stone, enriched with gold, silver, and ebony.
The Artemis shrines at Ephesus had a checkered history. The earliest was established on marshy land near the river, probably around 800 b.c.; it was later rebuilt and twice enlarged. The sanctuary housed a sacred stone—perhaps a meteorite—believed to have fallen from Zeus. By 600 b.c. Ephesus had become a major port, and in the first half of the sixth century, its citizens commissioned the Cretan architect Chersiphron and his son Metagenes to build a larger temple in stone to replace the timber structure. In 550 b.c. it too was destroyed when the Lydian king, Croesus, invaded the region. Croesus, whose name has passed into legend for his fabulous wealth, contributed generously to a new temple, the immediate predecessor to the “wonder of the world.” It was four times the area of Chersiphron’s temple, and over 100 columns supported its roof. In 356 b.c. one Herostratos, a young man “who wanted his name to go down in history,” started a fire that burned the temple to the ground.
The Ephesian architects Demetrios and Paeonios (and possibly Deinocrates) were commissioned to design a more magnificent temple, built to the same plan and on the same site. The first main difference was that the new building stood on a 9-foot-high (2.7-meter) stepped rectangular platform measuring 260 by 430 feet (80 by 130 meters), rather than a lower crepidoma like the earlier stone building. Another departure from the normally austere and reserved Greek architectural tradition was the opulence of the temple, which went beyond even its great size. Its porch (pronaos) was very deep: eight bays across and four deep. The Ionic columns towered to 58 feet (17.7 meters); each had, in place of the usual Ionic base, a 14-foot-high (3.5-meter) lower section, carved with narrative decorations in deep relief. The other difference was in the quality of the detail. The wonder of the world was decorated with bronze statues by the most famous contemporary artists, including Scopas of Paros. Their detail can only be guessed at, as can the overall appearance of the great temple. Attempts have been made at graphical reconstruction, but they vary widely in their interpretation of the sparse archeological evidence. Antipater described the Artemiseion as “towering to the clouds,” and Pliny the Elder called it a “wonderful monument of Grecian magnificence, and one that merits our genuine admiration.” Pliny also asserted that it took 120 years to build, but it may have taken only half that time. It was unfinished in 334 b.c. when Alexander the Great arrived in Ephesus.
By the time the Artemiseion was vandalized by raiding Goths in a.d. 262—it was partly rebuilt—both the city of Ephesus and Artemis-worship, once flaunted as universal, were in decline. When the Roman emperor Constantine redeveloped elements of the city in the fourth century a.d., he declined to restore the temple. By then, with most Ephesians converted to Christianity, it had lost its reason for being. In a.d. 401 it was completely torn down on the instructions of John Chrysostom. The harbor of Ephesus silted up, and the sea retreated, leaving barely habitable swamplands. As has so often happened, the ruined temple was reduced to being a quarry, and its stone sculptures were broken up to make lime for plaster. The old city of Ephesus, once the administrative center of the Roman province of Asia, was eventually deserted.
The temple site was not excavated until the nineteenth century. In 1863 the English architect John Turtle Wood set out to find the legendary building, under the auspices of the British Museum. He persisted through six expeditions and in 1869 discovered the base under 20 feet (6 meters) of mud. He ordered an excavation that exposed the whole platform. Some remains are now in the British Museum, others in the Istanbul Archeological Museum. In 1904 and 1905 another British expedition, led by David Hogarth, found evidence of the five temples, each built on top of its predecessor. Today the site is a marshy field, a solitary column the only reminder that in that place once stood one of the seven wonders of the ancient world.
Further reading
Clayton, Peter, and Martin Price. 1988. The Seven Wonders of the Ancient World. London: Routledge.
Cox, Reg, and Neil Morris. 1996. The Seven Wonders of the Ancient World. Parsippany, NJ: Silver Burdett.

Archigram

The Archigram group was established in 1961 by a few young British architects “united by common interests and antipathies.” Its founders were Peter Cook, Michael Webb, and David Greene, who were soon joined by Dennis Crompton, Ron Herron, and Warren Chalk. Archigram’s international impact—its architectural feat, so to speak—was significant. Other architects would give form to its notions. The Centre Pompidou, Paris, by Renzo Piano and Richard Rogers, and Arata Isozaki’s buildings at the 1970 Osaka World’s Fair are redolent of the fantastic schemes drawn, but never built, by Archigram. The Austrian architect Hans Hollein, too, acknowledged his debt to them after 1964. It is in the realm of ideas about living in an advanced industrial civilization that the group offered most.
All the founders had been students at the Architectural Association school in London, where they had learned, in the face of a then-reactionary architectural profession, to apply democratic principles to the art. The members who came later assimilated those ideas and blended them with other influences, notably the futuristic urban visions of Friedrich Kiesler and Bruno Taut and the technological notions of Richard Buckminster Fuller, whom they heroized. They also formed a symbiotic intellectual association with the exactly contemporary Japanese Metabolist group, in which Isozaki was preeminent. The Japanese applauded their efforts to “dismantle the apparatus of Modern Architecture.”
Like the Dutch De Stijl group around 1920, Archigram cooperated mainly through a polemical journal; and like the Hollanders, it drew its name from the journal’s title. Archigram (derived from “architecture” and “telegram” or “aerogram”) was published (almost) annually between 1961 and 1974. More a polemical broadsheet than a journal, it attacked the smugness of modernist architectural conservatism, reinforced by what can best be called Britishness. The powerful publication ran to ten issues, preaching an urgent message about architecture that has been described as “esthetic technocratic idealism.” Possibly the most significant architectural publication of the decade, it used a “pop” format, including beautifully drawn comic strips, to declare the group’s “optimism and possibilities of technology and the counterculture of the pop generation.”
The 1964 issue, after a controversial “Living City” exhibition at London’s Institute of Contemporary Arts, attracted the critic Reyner Banham, who became the group’s champion. There followed a succession of (perhaps) outlandish architectural proposals. Archigram’s direction was urban, technological, autocratic—and some have said inhumane. The members believed that technology was the hope of the world, so traditional means of building houses and cities must be superseded. Their favorite words
were change, adaptability, flexibility, metamorphosis, impermanence, and ephemerality. Accordingly, they designed a living environment that incorporated all kinds of gadgetry. They proposed an inflatable bodysuit containing food, radio, and television, and the “suitaloon,” a house carried on the back. These eccentric ideas extended from the individual to the communal: Chalk’s Capsule Homes (1964) were projected alongside Cook’s Plug-in City (1964–1966), in which self-contained living units could be temporarily fitted into towering structural frames, and Herron’s nomadic Walking City, in which skyscrapers could move on giant telescoping legs. The group published its Instant City in 1968.
It has been suggested that in the 1960s Archigram was to modern architecture what the Beatles were to modern music. But in the early 1970s they more or less dispersed, Greene and Herron (for a while) becoming teachers in the United States. Crompton, Cook, and Herron formed Archigram Architects (1970–1974). Herron and Cook then established independent practices in various partnerships. Crompton maintained links with the Architectural Association, and Greene turned to writing poetry and practicing architecture. Webb moved permanently to the United States and after 1975 taught at Cornell and Columbia Universities in New York. Chalk continued writing and teaching in the United States and England, mostly at the Architectural Association, until he died in 1987.
See also
Industrialized building; Pompidou Center (Beaubourg)
Further reading
Archigram. 1961–1974. 10 issues (numbers 1–9½). London: Archigram.
Cook, Peter, ed. 1972. Archigram. London: Studio Vista.
Crompton, Dennis, ed. 1994. A Guide to Archigram, 1961–1974. London: Architectural Press.

Appian Way Italy

The Appian Way (Via Appia), the oldest and perhaps most famous Roman road, was built by the Censor Appius Claudius Caecus in 312 b.c. Enlarging a track between Rome and the Alban Hills and forming the main route to Greece and the eastern colonies, this so-called queen of roads (regina viarum) ran south from the Porta Capena in Rome’s Servian Wall to Capua. It passed through the Appii Forum to the coastal town of Anxur (now Terracina), 60 miles (100 kilometers) from Rome, to which point it was almost straight, despite crossing the steep Alban Hills and the swampy Pontine Marshes. In 190 b.c. it was extended to Brundisium (modern Brindisi) on the Adriatic coast—more than 350 miles (560 kilometers) from the capital and eighteen days’ march for a legion. Parts of it—now called the Via Appia Antica—remain in use after more than 2,000 years.
The medieval proverb “A thousand roads lead man forever toward Rome” was popularized in William Black’s Strange Adventures of a Phaeton (1872) as “All roads lead to Rome.” That was probably once true: the Romans built about 50,000 miles (80,000 kilometers) of paved roads throughout their empire, mainly to expedite movements of the legions. Inevitably, the system was put to wider use and eventually served all kinds of travelers: dignitaries, politicians, commercial traffic of all kinds, and even an official postal service.
Roman engineers efficiently developed road-building techniques to create enduring structures. Usually (but not always), roads were laid upon a carefully constructed embankment (agger) to provide a foundation—rubble laid in such a way as to provide proper drainage—for the base. The dimensions of the agger varied according to the importance of the road. Sometimes it may have been just a small ridge, but on major routes it could be up to 5 feet high and 50 wide (1.5 by 15 meters). For very minor roads no embankment was built, but two rows of curbstones defined the carriageway; the excavation between them was layered with stones and graded material, the topmost sometimes forming the pavement. Overall, the depth of a Roman road from the surface to the bottom of the base was up to 5 feet (1.5 meters). It seems that road width varied according to function, importance, and topography. The widest (decumanus maximus) was 40 feet (12.2 meters) wide, while a minor road might be only 8 feet (2.4 meters). Rural thoroughfares were generally 20 feet (6 meters), but all roads became narrower over difficult terrain: some mountain passes, at less than 10 feet (3 meters), were too narrow (and often too steep) for carts.
Although stone was sometimes transported from a few miles away, local material was normally used. Of course, that practice gave rise to differences in construction along the length of a road, as is evident in the Via Appia. At one place a 3-foot-thick (1-meter) bottom layer of earth and gravel from the neighboring mountains was consolidated between the curbs and covered by a thinner layer of gravel and crushed limestone, also contained by parallel rows of closely placed large stones. Elsewhere, a base layer of sand was covered with another of crushed limestone into which slabs of lava up to 15 inches (50 centimeters) thick were fixed. Stone surfaces were mandatory for urban streets after 174 b.c., but other
roads were not always stone-paved, especially in difficult terrain. Like the substructure, surfaces varied according to what materials were locally available: gravel, flint, small broken stones, iron slag, rough concrete, or sometimes fitted flat stones were used. The pavement thickness varied from a couple of inches on some roads to 2 feet (0.6 meter) at the crown of others. Surfaces sloped down—as steeply as 1 in 15—from the center, to allow rainwater runoff into flanking ditches.
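The drainage geometry is easy to quantify. A short sketch computing the drop from crown to curb at the steepest quoted crossfall of 1 in 15, for the representative widths given earlier:

```python
# Edge drop produced by a 1-in-15 crossfall from the road's center line,
# for the representative Roman road widths quoted above.
CROSSFALL = 1 / 15   # steepest slope quoted, falling from crown to edge

for name, width_ft in [("minor road", 8), ("rural road", 20),
                       ("decumanus maximus", 40)]:
    drop_in = (width_ft / 2) * CROSSFALL * 12   # drop in inches at the curb
    print(f"{name:17s} {width_ft:2d} ft wide -> {drop_in:4.1f} in drop at edge")
```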
Roman roads were strong enough to support half-ton metal-wheeled wagons, and many were wide enough to accommodate two chariots abreast. Some roads were provided with intentional ruts, intended to guide wagons on difficult stretches. Under normal traffic a paved Roman road lasted up to 100 years. Beginning with the Appian Way, the ancient Roman engineers flung an all-weather communication network across Italy and eventually their empire. The poet Publius Papinius Statius wrote late in the first century a.d.:
  How is it that a journey that once took till sunset
  Now is completed in scarcely two hours?
  Not through the heavens, you fliers, more swiftly
  Wing you, nor cleave you the waters, you vessels.
Further reading
Chevallier, Raymond. 1976. Roman Roads. London: Batsford.
Hagen, Victor von, and Adolfo Tomeucci. 1967. The Roads That Led to Rome. Cleveland: World Publishing.
White, K. D. 1984. Greek and Roman Technology. London: Thames and Hudson.

Angkor Wat Cambodia

Angkor Wat, a temple complex dedicated to the Hindu deity Vishnu, was built in the twelfth century a.d. in the ancient city of Angkor, 192 miles (310 kilometers) northwest of Phnom Penh. It is probably the largest (and, as many have claimed, the most beautiful) religious monument ever constructed. Certainly it is the most famous of all Khmer temples.
Angkor served as the capital of the Khmer Empire of Cambodia from a.d. 802 until 1295. Evidence uncovered since 1996 has led some scholars to assert that the site may have been occupied some 300 years earlier than first thought, obviously affecting accepted chronologies. Whatever the case, its powerful kings held sway from what is now southern Vietnam north to Yunnan, China, and westward to the Bay of Bengal. The city site was probably chosen for strategic reasons and for the agricultural potential of the region. The Khmer civilization was at its height between 879 and 1191, and as a result of several ambitious construction projects, Angkor eventually grew into a huge administrative and social center stretching north to south for 8 miles (13 kilometers) and east to west for 15 miles (24 kilometers). The population possibly reached 1 million.
Apart from the hundreds of buildings—temples, schools, hospitals, and houses—there was an extensive system of reservoirs and waterways. The public and domestic buildings, all of timber, have long since decayed. But because they were the only structures in which masonry was permitted, over 100 temple sites survive. Earlier examples were mostly of brick, but later, the porous, iron-bearing material known as laterite was used, and still later sandstone, quarried about 25 miles (40 kilometers) away.
The city of Angkor was the cult center of Devaraja, the “god-king,” and an important pilgrimage destination. The Khmer kings themselves, from Jayavarman II (802–850) onward, had come to be worshiped as gods, and the temples they built were regarded not only as earthly monuments but also as symbols of Mount Meru, the cosmological home of the Hindu deities. The official state religion was worship of the Siva Lingam, which signified the king’s divine authority. Jayavarman II had identified the kingship with Siva, and acting upon that precedent, King Suryavarman II (1113–ca. 1150) presented himself as an incarnation of Vishnu. He built Angkor Wat as a temple and administrative center for his empire and as his own sepulcher (which is why it faces west); to celebrate his status, he dedicated it to Vishnu.
Financed by the spoils of war and taking over thirty years to finish, the sandstone-and-laterite Angkor Wat occupies a 2,800-by-3,800-foot (850-by-1,000-meter) rectangular site. Its layout provides an architectural allegory of the Hindu cosmology. The temple is surrounded by a 590-foot-wide (180-meter) moat, over 3 miles (5 kilometers) long, which represents the primordial ocean. A causeway decorated with carvings of the divine serpents leads to a 617-foot-long (188-meter) bridge that gives access to the most important of four gates. The temple is reached by passing through three galleries separated by paved walkways. It is an approximately pyramidal series of terraces and small buildings arranged in three ascending stories—they stand for the mountains that encompass the world—and surmounted at the center by a temple “mountain” of five lotus-shaped towers, symbolizing the five peaks of Mount Meru. Four of the original nine towers have succumbed to time and weather. The temple walls are replete with wonderfully crafted bas-reliefs, many of which were once painted and gilded, including about 1,700 heavenly nymphs and others that depict scenes of Khmer daily life, episodes from the epics Ramayana and Mahabharata, the exploits of Vishnu and Siva, and (of course) the heroic deeds of King Suryavarman II.
In 1177 Angkor fell to the Cham army from neighboring Champa (in present-day Vietnam), who held it until it was retaken early in the reign of the Khmer King Jayavarman VII (1181–ca. 1215). When he built Angkor Thom nearby he dedicated his new capital to Buddhism, and Angkor Wat became a Buddhist shrine. Many of its carvings and statues of Hindu deities were replaced by Buddhist art. The Thais sacked Angkor in 1431. The following year the Khmers abandoned the city, and it was left to the encroaching jungle for a few centuries. However, Theravada Buddhist monks kept Angkor Wat as intact as possible until the late nineteenth century, making it one of the most important pilgrimage destinations in Southeast Asia.
The French explorer Henri Mouhot “discovered” Angkor in 1860. After French imperialism imposed itself in Indochina in 1863, the site attracted the scholarly interest of westerners. In 1907, when Cambodia had been made a French protectorate and Thailand returned Angkor to its control, L’École Française d’Extrême-Orient established the Angkor Conservation Board. It seems that for forty years the European colonizers were more interested in reconstructing Angkor Wat than in undertaking scholarly restoration. The prodigal use of reinforced concrete made many of the buildings unrecognizable. The vandalism was mercifully halted when Khmer Rouge
guerrillas occupied the site, followed by the Vietnamese army. When an uneasy peace was restored in 1986, the Archaeological Survey of India took up the project, replacing much of the French work with more modern and less intrusive techniques. At the invitation of the Cambodian government, the Japanese Government Team for Safeguarding Angkor began a four-year preservation and restoration project in November 1994, initially focused on the Bayon temple in Angkor Thom but extending to the outer buildings of Angkor Wat. Because of delays caused by the July 1997 conflicts in Cambodia, the program was extended into 1999.
Further reading
Fujioka, Michio, Tsuenari Kazumori, and Chikao Mori. 1972. Angkor Wat. Tokyo: Kodansha International.
Mannikka, Eleanor. 1996. Angkor Wat: Time, Space, and Kingship. Honolulu: University of Hawaii Press.
Narasimhaiah, B. 1994. Angkor Wat, India’s Contribution in Conservation, 1986–1993. New Delhi: Archaeological Survey of India.

Amsterdam Central Station The Netherlands

Amsterdam Central Station is in fact geographically central in the city. Although it conformed to the general pattern of many metropolitan railroad stations before and after, it was an architectural and engineering achievement in that it was built on three artificial islands in the River IJ, supported by no fewer than 26,000 timber piles driven into the soft river bottom. That was a feat perhaps remarkable to the rest of the world but quite commonplace to the Dutch, who for centuries had coped with too much water and too little land.
Economic activity in Amsterdam revived with the railroads in the second half of the nineteenth century. New shipyards and docks were built. Extravagant public buildings such as P. J. H. Cuypers’s National Museum (1876–1915) and H. P. Berlage’s famous Stock Exchange (1884–1903) celebrated both the financial boom and awakening nationalism. In 1876 Cuypers and A. L. van Gendt were commissioned to design the Amsterdam Central Station. It was the first time that such work had been entrusted to an architect rather than to engineers, a decision taken because the building would hold an important place in the nation’s image. Indeed, the brief jingoistically demanded that it should be in the Oud-Hollandsche (Old Dutch) style.
That qualification presented little difficulty to Cuypers, who had developed a personal historical-revivalist manner based on late Gothic and early Renaissance forms and ideas. His abundantly decorated National Museum was already under construction. Eclectically drawing on a wide variety of styles, it did not readily expose his rationalist architectural philosophy, gleaned from E. E. Viollet-le-Duc’s theories. Cuypers wanted to restore the crafts to a place of honor and insisted on the honest application of traditional materials. He was responsible for the appearance of the station; van Gendt, thoroughly experienced as mechanical engineer for the railroad, would take care of constructional aspects.
Work commenced in 1882. The station was built on the artificial islands in the Open Havenfront of Amsterdam’s original harbor, which had been cut off from the River IJ by the railroad. Special engineering skill was needed to create a solid foundation for the massive building and the rolling loads imposed by trains. As noted, 26,000 timber piles support the structure. The four-story station building, of red brick with stone dressings, is unmistakably Dutch. It is 1,020 feet (312 meters) long and 100 feet (30.6 meters) deep. On the axis of Damrak—the main street leading to the dam in the downtown area—a central pavilion flanked with clock towers houses the main entrance to the concourse. Its facade is resplendent with ornament: the clock faces; the arms of those European cities to which the railroad gave access; and an assortment of allegorical relief sculptures wherever they could fit, aptly representing such themes as “Steam,” “Cooperation,” and “Progress.” Convinced that the building process needed the collaboration of all the arts, Cuypers sought the artistic advice and skill of others, especially J. A. Alberdingk Thijm and V. de Stuers, who had worked on the National Museum.
Late in 1884 the architect produced two sketches for the platform roof; they have been characterized as “unassuming.” But that part of the design was not in his contract, and the structure—anything but unassuming—was designed by the railroad’s own civil engineer, L. J. Eijmer. Carried on a frame of fifty semicircular, open-web trusses of wrought iron, spanning 150 feet (45 meters), the original station shed covered about 3.75 acres (1.5 hectares). During construction, problems arose over anchoring the arches, no doubt due to the foundation soil, but a suggestion to build several smaller, lighter roofs was rejected, and it was resolved to proceed with the monumental design “on a scale that could compare with that of the great examples abroad.” Cuypers designed the decorative elements of the rafters and the glazed gable end. The roof was completed in October 1889. In 1922, to cover new platforms, another similar arch was added beside the IJ.
The final phase of construction was the King’s Pavilion at the station’s eastern end in 1889—in the event, an ironic title, since the kingdom of the Netherlands was to be ruled only by queens for more than a century. Coaches could be driven inside, where a stair led to the royal waiting room, all in Cuypers’s individualistic neo-Gothic style and enriched with a color scheme designed by the Austrian G. Sturm and executed by G. H. Heinen. The room was restored in 1995.
The building of Amsterdam Central Station, “a palace for the traveler,” clearly demonstrates two issues that confronted architects and engineers late in the nineteenth century. First, after sixty years of building railway stations, they were no closer to finding an esthetic that suited the building type, fitted the new materials and technology, and removed the unnecessary tension between utility and beauty. Second, and related to the first, the nature of architectural practice was changing as increased knowledge called for specialization and the eventual replacement of the omniscient, not to say omnipotent, architect by a design team: architect, yes, but also mechanical engineer, structural engineer, interior designer, and consultant artist. That idea would not be enunciated until Walter Gropius wrote the Bauhaus manifesto in 1919.


Saturday, June 6, 2015

La Alhambra palace Granada, Spain

Built between 1238 and 1391, the most outstanding reminder of Granada’s glorious Moorish epoch is La Alhambra (the Red Castle), a complex of fortresses, palaces, and gardens for the Nasrid kings on a high plateau called the Cerro del Sol. Granada lies beneath it on the southeast, and beyond the city the fertile Andalusian plain stretches toward the mighty Sierra Nevada. It has been justifiably claimed that in La Alhambra “all the refinement, wealth and delicacy of Islamic art and architecture reached its last climax in the West.”
Following the Arab conquest of the Berbers in the seventh century A.D., intermarriage between the two peoples produced the ethnic group now known as the Moors. In 711 a Moorish army led by Tariq ibn-Ziyad swarmed across the Straits of Gibraltar, and within a little over two decades they had conquered much of Spain. They made Córdoba the center of al-Andalus (Andalusia), part of an Islamic empire extending from the borders of China and India to the Atlantic. Seville, Jaén, and Granada were soon established as seats of Islamic culture and commerce. The Visigoths were expelled from Granada in 711 by the Moors, who governed it from Córdoba until the fall of the caliphate in 1031, after which it was ruled for two centuries by the successive Berber dynasties of the Almoravides and the Almohades. When Córdoba was taken by Christian armies in 1236, Moorish Granada grew in importance, reaching its apogee under the Nasrid kings, beginning with Ibn al-Ahmar, called Mohammed I, in 1241. Granada was the last Islamic outpost in Spain until the Treaty of Santa Fé consigned it to Ferdinand and Isabella 250 years later.
In 1238 Mohammed I repaired an irrigation channel from the Darro River to the top of the Red Hill and reinforced the ninth-century fortress known as La Alcazaba with 90-foot-high (27-meter) towers and five fortified gates. The stronghold became the kernel of La Alhambra. Mohammed II (1273–1302) extended the fortifications, and La Alcazaba was again modified as a luxurious residence for Mohammed III. In 1318 the architect Aben Walid Ismail was commissioned to design El Generalife, the Nasrids’ summer palace, among beautiful irrigated gardens on an adjoining hilltop. Although the majority of the buildings of La Alhambra cannot be as accurately dated as that, it is known that most were initiated by Yusuf I (1333–1354) and Mohammed V (1354–1391). After the surrender of Boabdil, successive Catholic kings refurbished the palace, carefully retaining the Moorish style, an approach that testifies to its sublime beauty. In the sixteenth century the Holy Roman Emperor Charles V had some of the older buildings demolished to make way for his own, designed by the architect Pedro Machuca.
The confluence of cultures in Andalusia generated the unique Moorish architecture that continues to be influential in Spain and has made an impact elsewhere, especially upon garden design. Because La Alhambra is such an accumulation of sequential elements, many of them starkly contrasting (like massive towers and delicate arcades), the paradoxical fortress-palace is almost impossible to describe. Within the forbidding utilitarian curtain wall of the fortress there are the inviting and surprising delights of the palace, built around secluded courtyards: sumptuous halls and chambers, arcaded internal patios with pools and fountains, wooded plazas, and peaceful gardens with streams of tinkling or chattering water. All is laid out with symmetrical geometry and careful proportion, the various buildings placed in a composition of studied informality. And, as would never be suspected from looking upon the austere outer defenses, all within is profusely decorated with restrained taste in the finest materials and finishes: glazed tile skirtings; walls, friezes, and arcades replete with stucco plant motifs; and ceilings ornamented with bows and mocárabes (designs of several prisms on a concave base), sometimes picked out with gold or lapis lazuli, and sometimes bearing verses from the Koran, inscribed with exquisite calligraphy.
In The Alhambra (1832) the American writer Washington Irving wistfully remarked: “A few broken monuments are all that remain to bear witness to [Moorish] power and dominion. Such is the Alhambra—a Moslem pile in the midst of a Christian land; an Oriental palace amidst the Gothic edifices of the West; an elegant memento of a brave, intelligent, and graceful people, who conquered, ruled, flourished, and passed away.” La Alhambra and the gardens of El Generalife (whose buildings are all but gone) were added to UNESCO’s World Heritage List in 1984.

Alberobello trulli Italy

The Murgia dei Trulli, with its communes of Martina Franca, Locorotondo, Cisternino, and Alberobello, is located in the Apulian interior at the upper part of the heel of Italy. Although trulli are scattered throughout the region, more than 1,500 of them are in the Monti and Aja Piccola quarters, on the western hill of Alberobello. This unique conical house form is significant in the history of architecture because it perpetuated well into the twentieth century a construction technique practiced throughout the northern Mediterranean since prehistoric times.
The name derives from truddu, Greek for “cupola.” The clustered stone dwellings of Alberobello, small by modern Italian housing standards, are built by roofing almost square or rectangular bases (although some tend toward a circle) with approximately conical cupolas of roughly worked flat limestone slabs, stacked without mortar in corbeled courses. These gray roofs, no two of which are quite the same, are normally crowned with a whitewashed pinnacle in the form of a sphere standing on a truncated inverted cone. Some are painted with symbols: astrological signs or Christian ones, and even some of older pre-Christian significance. As is often the case with vernacular architecture, geometrical precision is not a priority: nothing is truly right-angled, nothing truly plumb. Bernard Rudofsky describes the roof as a retrocedent wall, because it also encloses habitable space that is traditionally used for storage. Typically, the inside of the roof is a parabolic dome, formed by packing the gaps between the larger structural stones. The walls of the ground floor are thick enough—they can be up to 10 feet (3 meters) in older houses—to include alcoves for a hearth or cupboards, or even a curtained-off recess for a bed. Doorways are low, and the interior, though whitewashed, is usually quite dingy because the windows are small, possibly for structural reasons. Curved walls make furnishing difficult. More recent trulli, the last of which were built in the 1950s, are interconnected with others to gain more living space.
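How such a corbeled roof closes over its base can be sketched with simple arithmetic. The toy Python calculation below is an illustration only, not survey data from Alberobello: the span, course height, and safe inward step per course are all assumed round numbers.

# Illustrative sketch (assumed dimensions): courses of a corbeled,
# trullo-style cone in which each ring of slabs steps slightly inward.
def corbel_courses(span_m=4.0, course_h_m=0.12, overhang_m=0.06):
    half_span = span_m / 2.0      # distance left to close at the springing
    courses = 0
    while half_span > 0:
        half_span -= overhang_m   # each course may safely project this far
        courses += 1
    return courses, courses * course_h_m

n, height = corbel_courses()
print(f"{n} courses, roof about {height:.1f} m high")   # 34 courses, ~4.1 m

With these figures the cone closes in a few dozen courses and rises roughly as high as the room is wide, consistent with the steep profiles seen in the town.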
The oldest documented Alberobello examples date from the fifteenth century, coinciding with the foundation of a permanent agricultural community centered in the town. However, the essential building technique and the consequent house form are much older. The type, clearly related to the prehistoric nuraghi of Sardinia and the rather more sophisticated Mycenaean tholos, has been archeologically linked to both the nomadic pastoral Early Bronze Age culture and permanent agrarian communities in the Apennine region. Remarkably, similar constructions can be found in the middle of Scotland and on the west coast of Sweden.
A plausible and somewhat romantic tradition dates the development of trulli as the house form of Alberobello to a single historical event. It is said that in the eighteenth century the local ruler Count Girolamo II of Acquaviva compelled the peasant farmers to build their houses with mortarless stone roofs. Because dry-stone structures were tax-exempt, and because they could be (relatively) easily dismantled before the regular visits of inspectors from Naples, he chose this method of tax avoidance. Although the people were freed from his regulation by a decree from Ferdinando IV of Bourbon in May 1797, the house form persisted, perhaps because of rural conservatism. Trulli are no longer built by the traditional technique and in the traditional style, but some of the master builders are still living, and the craft skills have not yet been lost. After the mid-1950s the “romantic” trulli were noticed by tourists and real-estate agents, and that has been to the detriment of many of them. Since the inclusion of the Alberobello precinct on UNESCO’s World Heritage List in 1996, serious archeological study has been undertaken, and the old craft skills have been applied to an extensive restoration program.

Akashi-Kaikyo Bridge Kobe, Japan

The graceful Akashi-Kaikyo Bridge, linking Kobe City and Awajishima Island across the deep straits at the entrance to Osaka Bay, was opened to traffic on 5 April 1998. Exploiting state-of-the-art technology, it formed the longest part of the bridge route between Kobe and Naruto in the Tokushima Prefecture, completing the expressway that connects the islands of Honshu and Shikoku. With a main span of 1.25 miles (1.99 kilometers) and a total length of nearly 2.5 miles (3.91 kilometers), it was then the longest suspension bridge ever built.
With the growing demand for faster land travel, more convenient links over water obstacles become necessary. If long-span—say, over 1,100 yards (1,000 meters)—bridges are to be politically, economically, and structurally viable, design must be optimized. Because a bridge’s selfweight increases in direct proportion to its span, the structure must be as light as possible while achieving minimum deformation and maximum stiffness under combined dead, wind, and traffic loads. A cable-supported suspension bridge is an ideal way to achieve that.
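That claim can be made concrete with elementary statics. In the hedged Python sketch below (the load w is an assumed round figure, not design data), the midspan bending moment of a uniformly loaded beam grows with the square of the span, while the horizontal pull of a parabolic cable grows only linearly once the sag is kept at a fixed fraction of the span.

# Illustrative only: how dead-load demands scale with span.
w = 200e3   # N/m, assumed uniform load

for L in (500, 1000, 2000):          # spans in metres
    M = w * L**2 / 8                 # midspan moment of a simple beam
    f = L / 10                       # cable sag at a 1:10 sag:span ratio
    H = w * L**2 / (8 * f)           # horizontal pull of a parabolic cable
    print(f"L={L:5d} m  beam moment={M/1e9:6.1f} GN·m  cable pull={H/1e6:5.0f} MN")

Doubling the span quadruples the beam moment but merely doubles the cable tension, which is why spans approaching 2,000 meters are practical only as cable-supported structures.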
Alternative designs were developed for the Akashi-Kaikyo Bridge, considering a range of main span lengths. The most economical length was between 6,500 and 6,830 feet (1,950 and 2,050 meters); the final choice of 6,633 feet (1,990 meters) was constrained by geological and topographical factors. The length of the side spans was fixed at 3,200 feet (960 meters), enabling the cable anchorages to be located near the original shorelines. The clients insisted that, because of its immense span, the form of the bridge had to assure the public that it would withstand all kinds of loads, including typhoons and earthquakes. Also, it had to express the essential beauty of the Seto-Inland Sea region and evoke a bright future for the Hyogo Prefecture. The Akashi-Kaikyo Bridge would be painted green-gray because the color was redolent of the forests of Japan.
Construction began in May 1988. The reinforced concrete anchorages for the cables on the respective shores are of different sizes, because of different soil conditions. As an indication, the one at the Kobe end has a diameter of 283 feet (85 meters) and is 203 feet (61 meters) deep. It is the largest bridge foundation in the world.
Huge cylindrical steel chambers (caissons) form the foundation of the main towers. Fabricated off-site, they are 217 feet (65 meters) high—more than a 30-story building—and 267 feet (80 meters) in diameter; each weighs 15,000 tons (15,240 tonnes). To provide a level base, an area of seabed about as big as a baseball field was excavated under each of them. They were floated into position, and their exterior compartments were flooded to carefully sink them in 200 feet (60 meters) of water. This was achieved to within a 1-inch (2.54-centimeter) tolerance. Each was then filled with 350,000 cubic yards (270,000 cubic meters) of submarine concrete. The foundations of the bridge were seismically designed to withstand an earthquake of Richter magnitude 8.5, with an epicenter 95 miles (150 kilometers) away. On 17 January 1995 the Great Hanshin Earthquake (magnitude 7.2) devastated nearby Kobe; its epicenter was just 2.5 miles (4 kilometers) from the unfinished bridge. A careful postquake investigation showed that, although the quake had lengthened the bridge by about 3.25 feet (1 meter), neither the foundations nor the anchorages were damaged. As the builders boasted, it was “a testament to the project’s advanced design and construction techniques.”
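A toy buoyancy check suggests why the procedure works. In the Python sketch below, the caisson dimensions and mass come from the text and the seawater density is standard, but the model is deliberately simplified (the real caissons were double-walled, compartmented structures).

import math

RHO_SEA = 1025.0                               # kg/m^3, seawater
dia_m, height_m, mass_t = 80.0, 65.0, 15_000   # figures quoted above

waterplane = math.pi * (dia_m / 2) ** 2        # plan area at the waterline
draft = mass_t * 1000 / (RHO_SEA * waterplane) # floating depth of immersion
print(f"floating draft ≈ {draft:.1f} m of the {height_m:.0f} m height")

Floating on its plan area alone, the empty caisson draws only about 3 meters of water; admitting seawater to the outer compartments adds weight until the buoyancy is overcome, letting the crews lower it under control onto the prepared bed.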
The towers rise to 990 feet (297 meters) above the waters of the bay (for comparison, those on the Golden Gate Bridge are 750 feet [230 meters] high). They have steel shafts, each assembled in thirty tiers, generally made up of three prefabricated blocks that were hoisted into place and fixed with high-tensile bolts. The shafts are cruciform in cross section, designed to resist oscillation induced by the wind. The main cables, fixed in the massive anchorages and passing over the tops of the towers, were spun from 290 strands of galvanized steel wire—a newly developed technology—each containing 127 filaments about 0.2 inch (5 millimeters) in diameter. Their high strength does away with the need for double cables, and because they achieve a sag:span ratio of 1:10, the height of the main towers could be reduced. To prevent corrosion of the cables in the salt atmosphere, dehumidified air flows through a hollow inside them, removing moisture. The towers and the suspended structure are all finished with high-performance anticorrosive coatings to suit the demanding marine environment. From the main cables, polyethylene-encased, parallel-wire-strand suspension cables support the truss-stiffened girder that carries a six-lane highway with a traffic speed of 60 mph (100 kph). The preassembled truss members were hoisted to the deck level at the main towers, carried to their location by a travel crane, and connected; then the suspension cables were attached. This construction technique was chosen because it did not disrupt activity on the water, where 1,400 ships pass through the straits daily.
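The figures quoted above also allow a rough check of the cable design. In the Python sketch below the wire count and diameter come from the text, while the load per cable is an assumed round number, so the results are indicative only.

import math

strands, wires_per_strand = 290, 127
wire_dia_m = 0.005                     # ~0.2 inch, from the text

n_wires = strands * wires_per_strand   # 36,830 wires per main cable
area = n_wires * math.pi * wire_dia_m**2 / 4
print(f"{n_wires} wires, steel area ≈ {area:.2f} m^2")

L, f = 1990.0, 199.0                   # main span and its 1:10 sag, metres
w = 140e3                              # N/m carried per cable (assumption)
H = w * L**2 / (8 * f)                 # horizontal tension, parabolic cable
print(f"tension ≈ {H/1e6:.0f} MN, stress ≈ {H/area/1e6:.0f} MPa")

The implied working stress of a few hundred megapascals sits well below the roughly 1,800-megapascal strength class of the newly developed wire, which is why a single pair of main cables sufficed.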

Airship hangars Orly, France

The French dominated the early history of human flight. In September 1783 the Montgolfier brothers launched a hot-air balloon carrying farm animals to show that it was safe to travel in the sky, and a few weeks later Pilâtre de Rozier and the Marquis d’Arlandes took to the air for a 5.5-mile (9-kilometer) trip over Paris. In 1852 another Frenchman, the engineer Henri Giffard, built the first successful airship—a steam-powered, 143-foot-long (44-meter), cigar-shaped affair that flew at about 6 mph (10 kph). About thirty years later Charles Renard and Arthur Krebs constructed an electrically powered airship that was maneuverable even in light winds. By 1914 the French military had built a fleet of semirigid airships, but they proved ineffective as weapons in the Great War. On the other hand, nonrigid airships were widely used for aerial observation, coastal patrol, and submarine spotting. Their advent generated a different type of very large building: the airship hangar. The first zeppelin shed at Friedrichshafen, Germany (1908–1909), had been 603.5 feet long, 151 wide, and 66 high (184 by 46 by 20 meters). Like most others built in Europe, it was a steel-lattice structure with a light cladding. Much more inventive and spectacular were the parabolic reinforced concrete hangars built from 1922 to 1923 on a small military airfield among farmlands at Orly, near Paris. They were a major achievement of engineering and architecture.
The French engineer-architect Marie Eugène Léon Freyssinet (1879–1962) studied at the École Polytechnique and the École Nationale des Ponts et Chaussées in Paris. After serving in the army in World War I he became director of the Société des Entreprises Limousin and later established his own practice. A great innovator, he worked mainly with reinforced concrete, building several bridges. By 1928 he was to patent a new technique, prestressing, that eliminated tension cracking in reinforced concrete and solved many of the problems encountered with curved shapes. Simply, steel reinforcing cables were stretched and the concrete poured around them; when it had set, the cables were released and (because it was now in compression) the structural member acquired an upward deflection. When it was loaded in situ, the resulting downward deflection brought it back to the flat position while remaining in compression.
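The principle is easy to show numerically. In the minimal Python sketch below, the section size, prestress force, and service moment are all assumed values, and the tendon is placed at the centroid for simplicity (real tendons are set eccentrically, which produces the upward camber described above).

b, h = 0.4, 1.0              # rectangular section, metres (assumed)
A = b * h                    # cross-sectional area, m^2
S = b * h**2 / 6             # elastic section modulus, m^3

P = 6.0e6                    # prestress force, N (assumed)
M = 0.8e6                    # midspan moment from load, N·m (assumed)

axial = -P / A               # uniform compression from prestress
bending = M / S              # tension at the bottom fibre, compression at top
print(f"bottom: {(axial + bending)/1e6:+.1f} MPa, top: {(axial - bending)/1e6:+.1f} MPa")

Because the bottom-fibre stress stays negative (compressive) under load, the member never experiences the tension that cracks ordinary reinforced concrete.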
At Orly, Freyssinet was presented with a brief that called for two sheds that could each contain a sphere with a radius of 82 feet (25 meters), to be built at reasonable cost. He responded by designing prestressed reinforced concrete buildings consisting of a series of parallel tapering parabolic arches that formed vaults about 985 feet long, 300 wide, and 195 high (300 by 90 by 60 meters). The internal span was about 266 feet (80 meters), and each arch was assembled from 25-foot-wide (7.5-meter) stacked, profiled sections only 3.5 inches (9 centimeters) thick; those at the base of the arch were 18 feet (5.4 meters) deep and those at the crown 11 feet (3.4 meters). Placed side by side, they formed a very stiff corrugated enclosure. Starting at a height of 65 feet (20 meters), reinforced yellow glass windows were cast in the outer flanges of the arches.
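The choice of a parabolic profile is structural, not merely esthetic: for a load that is uniform per unit of horizontal run, the parabola is the funicular (moment-free) shape, so the thin corrugated ribs work almost wholly in compression. The Python sketch below uses the approximate internal span and rise from the text; the load is an arbitrary assumed figure.

span, rise = 80.0, 60.0       # metres, approximate values from the text

def y(x):
    # height of a parabolic arch at distance x from one springing
    return 4 * rise * x * (span - x) / span**2

w = 50e3                      # N per metre of vault, assumed
H = w * span**2 / (8 * rise)  # horizontal thrust of the funicular arch
print([round(y(x), 1) for x in (0, 20, 40, 60, 80)], f"thrust ≈ {H/1e6:.1f} MN")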
Freyssinet specified an easily compactable concrete to ensure that the hangars would be waterproof. It was reinforced with steel bars and poured into reusable pine formwork that was itself stressed with tension rods to create prestressed concrete. The concrete was also designed to flow into every corner of the complicated molds, and it was fast-setting so that formwork could be quickly stripped and reused. The structure was temporarily supported on timber centering, and a network of cables held the formwork in tension until the concrete developed its full strength. In other structures lateral wind loading could be resisted by cross bracing, but because clear spans were imperative, Freyssinet provided the necessary stiffening by “folding” the concrete on the component arches. The selfweight of the massive structure was accommodated by increasing the cross-sectional area of the arches as they approached the ground, where the foundations consisted of deep horizontal concrete pads laid with an inward slope toward the center of the hangars. Tragically, in 1944, U.S. aircraft bombed these revolutionary and beautiful structures.

Airplane hangars Orvieto, Italy

The Italian engineer and architect Pier Luigi Nervi (1891–1979) was among the most innovative builders of the twentieth century and a pioneer in the application of reinforced concrete. In 1932 he produced some unrealized designs for circular aircraft hangars in steel and reinforced concrete that heralded the remarkable hangars he built for the Italian Air Force at Orvieto. None have survived, but they are well documented: more than enough to demonstrate that they were a tour de force, both as engineering and architecture.
Nervi had graduated from the University of Bologna in 1913. Following World War I service in the Italian Engineers Corps he established an engineering practice in Florence and Bologna before moving to Rome, where he formed a partnership with one Nebbiosi. Nervi’s first major work, the 30,000-seat Giovanni Berta Stadium at Florence (1930–1932), was internationally acclaimed for its graceful, daring cantilevered concrete roof and stairs. The revolutionary hangars followed soon after.
There were three types, all with parabolic arches and elegant vaulted roofs that paradoxically conveyed a sense of both strength and lightness. The first type, of which two were built at Orvieto in 1935, had a reinforced concrete roof made up of a lattice of diagonal bow beams, 6 inches (15 centimeters) thick and 3.7 feet (1.1 meters) deep, intersecting at about 17-foot (5-meter) centers. They supported a deck of reinforced, hollow terra-cotta blocks covered with corrugated asbestos-cement. The single-span roof measured 133 by 333 feet (40 by 100 meters), and its weight was carried to the ground through concrete equivalents of medieval flying buttresses. The 30-foot-high (9-meter) doors that accounted for half of one of the long sides of the hangar were carried on a continuous reinforced concrete frame.
In the other types Nervi’s fondness for structural economy led to the prefabrication of parts, saving time and money. Type two was his first experiment with parallel bow trusses assembled from open-web load-bearing elements, spanning the 150-foot (45-meter) width of the hangar. A reinforced-concrete roof covering provided stiffening. The third type combined the diagonal configuration of the first and the prefabrication techniques of the second. Six examples were built between 1939 and 1941 for air bases at Orvieto, Orbetello, and Torre del Lago. The massive roofs, covered with corrugated asbestos cement on a prefabricated concrete deck, were supported on only six sloping columns—at the corners and at the midpoints of the long sides—that carried the weight and thrust beyond the perimeter of the hangars. All the components were cast on-site in simple wooden forms.
The Germans bombed these amazing structures as they retreated from Italy toward the end of World War II. Nervi was gratified to learn that, even in the face of such a tragedy, the prefabricated joints of his hangars had held together. He later included them amongst his most “interesting” works, observing that their innovative forms would have been impossible to achieve by the conventional concrete technology of the day. In the early 1940s Nervi extended his experiments to ferrocimento—a very thin membrane of dense concrete reinforced with a steel grid—which he used to build a number of boats.
He next combined that material with the prefabrication techniques he had developed for the hangars. For Salone B at the Turin Exhibition of 1949–1950, he designed a 309-by-240-foot (93-by-72-meter) vaulted rectangular hall with a 132-foot-diameter (40-meter) semicircular apse at one end. The main hall roof and the hemidome over the apse consisted of corrugated, precast ferrocimento units less than 2 inches (5 centimeters) thick, supported on in situ buttresses, creating one of the most wonderful interior spaces of the twentieth century.
Nervi’s designs were too complex to be calculated by orthodox mathematical analysis, and he developed a design methodology that used polarized light to identify the stress patterns in transparent acrylic models. A few unbuilt projects were followed by three structures for the 1960 Rome Olympic Games. He built the Palazzo dello Sport (1959, with Marcello Piacentini), the Flaminio Stadium (1959, with Antonio Nervi), and the Palazzetto dello Sport (1957, with Annibale Vitellozzi). The last is a gem of a building whose rational structure is so transparently expressed that the observer can almost see the loads being shepherded to the ground in a way redolent of late English Gothic fan vaulting.

Afsluitdijk The Netherlands

The 20-mile-long (32-kilometer) Afsluitdijk (literally, “closing-off dike”), constructed from 1927 to 1932 between Wieringen (now Den Oever) and the west coast of Friesland, enabled the resourceful Dutch to turn the saltwater Zuider Zee (South Sea) into the freshwater IJsselmeer and eventually to create an entire new province, Flevoland. Like their successful responses to similar challenges before and since, it was an audacious and farsighted feat of planning, hydraulic engineering, and reclamation.
Throughout their history, the Netherlanders have fought a battle against the water. Much of their tiny country is well below average sea level, in places up to 22 feet (7 meters). The threat of inundation comes not only from the sea but also from the great river systems whose deltas dominate the geography of Holland. Over centuries, literally thousands of miles of dikes and levees have been built to win agricultural land back from the water, and having gained it, to protect it. From the seventeenth century Amsterdam merchants invested their profits in building the North Holland polders—Beemstermeer, the Purmer, the Wormer, the Wijde Wormer, and the Schermer—reclaimed through the ingenious use of the ubiquitous windmill.
In 1250 the 79-mile-long (126-kilometer) Omringdijk was built along Friesland’s west coast to protect the land from the sea, and as early as 1667 the hydraulic engineer Hendric Stevin bravely proposed to close off the North Sea and reclaim the land under the Zuider Zee. His scheme was then technologically impossible. The idea was revived in 1891 by the civil engineer and statesman Cornelis Lely. Based on research undertaken over five years, his plan was straightforward: a closing dike across the neck of the Zuider Zee would create a freshwater lake fed by the River IJssel and allow the reclamation of 555,000 acres (225,000 hectares) of polder land—in the event, 407,000 acres (165,000 hectares) were won. Despite Lely’s assurances about the feasibility of the plan, his parliamentary colleagues were unenthusiastic. But attitudes changed after the region around the Zuider Zee was disastrously flooded in 1916; moreover, World War I (in which Holland remained neutral) convinced the Dutch government that internal transportation links needed to be improved. The Zuiderzee Act was passed in 1918.
The Zuiderzeeproject commenced in 1920 with the construction of the Amsteldiepdijk, also known as the Short Afsluitdijk, between Van Ewijcksluis, North Holland, and the westernmost point of the island of Wieringen. There were some initial foundation problems and a financial calamity for the contractor, but the dike was completed in 1926. There followed the construction of the small test polder Andijk (1927) and the Wieringermeer (1927–1930).
The key element in the daring plan was the construction of the Afsluitdijk across the Waddenzee, an arm of the North Sea. The project was undertaken by a consortium of Holland’s largest dredging firms, known as N. V. Maatschappij tot Uitvoering van de Zuiderzeewerken. All the work, involving moving millions of tons of earth and rock, was carried out manually by armies of laborers working from each end of the structure. Built during the Great Depression, the Afsluitdijk was a welcome source of employment. It was completed on 28 May 1932. It was intended later to build a railroad over the broad dike, but as the volume of road traffic increased in Holland, priority was given to a four-lane motorway. The railroad was never built, although adequate space remains for it.
The closure of the Afsluitdijk enabled the eventual reclamation of three huge tracts of land formerly under the sea: the Noordoostpolder (1927–1942), East Flevoland (1950–1957), and South Flevoland (1959–1968). They were later combined to become a new province, Flevoland, with a total area of over 500 square miles (1,400 square kilometers). Its rich agricultural land supports two cities, Lelystad and Almere, although the latter is more properly a dormitory for Amsterdam. Flevoland is on average 16 feet (5 meters) below sea level. The great freshwater body south of the Afsluitdijk was renamed IJsselmeer. Its balance, carefully controlled through the use of sluices and pumps, is determined by inflow and outflow rates, rainfall and evaporation, and storage level changes. With a surface of nearly 500 square miles (131,000 hectares), it is the largest inland lake in the Netherlands. A proposal to reclaim a fifth polder, the 230-square-mile (60,300-hectare) Markerwaard, behind a 66-mile-long (106-kilometer) dike between Enkhuizen and Lelystad was not pursued, mainly because of ecological concerns.
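The balance described can be written as a simple bookkeeping rule: the change in storage equals river inflow minus sluiced outflow, plus rainfall minus evaporation over the lake surface. The Python sketch below is a hedged illustration; the surface area is taken from the text, and every flow and weather figure is invented for the example.

AREA_M2 = 1.31e9             # ~131,000 ha lake surface, from the text

def new_level(level_m, qin, qout, rain_mm, evap_mm, days=1):
    # qin/qout in m^3/s; rain/evap in mm per day; level relative to datum
    dv = (qin - qout) * 86_400 * days              # river and sluice volumes
    dv += (rain_mm - evap_mm) / 1000 * AREA_M2 * days
    return level_m + dv / AREA_M2

# e.g., IJssel inflow 300 m^3/s while sluicing 250 m^3/s to the Waddenzee:
print(f"{new_level(-0.40, 300, 250, 2.0, 1.5):+.3f} m after one day")

Because the sluices in the Afsluitdijk discharge by gravity, releases to the Waddenzee are scheduled around low tide to hold the target level.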
In February 1998 the Dutch Ministry of Transport, Waterways, and Communication published the Waterkader report, setting out national water-management policies until 2006. Aiming to keep the Netherlands safe from flooding, it presents a case for reserving temporary water-storage areas—“controlled flooding”—against times of high river discharge or rainfall. The government, recognizing that raising the dikes and increasing pumping capacity cannot continue forever, has adopted the motto “Give water more space.” The document Long-Range Plan Infrastructure and Transport of October 1998 promised to invest 26 billion guilders (approximately U.S.$13 billion) in the nation’s infrastructure before 2006. Part of the money is earmarked for waterways, including links between Amsterdam and Friesland across the IJsselmeer.