Friday 31 July 2015

10 STRANGE STORIES FROM THE EARLY STUDY OF ELEMENTS


10 Strange Stories From The Early Study Of Elements
By Debra Kelly,
Listverse, 31 July 2015.

Chemistry as we learn it in school can be a pretty dry subject, heavy on memorizing numbers and chemical reactions. But it doesn’t have to be: there are plenty of fascinating stories about how we first learned about the elements on the periodic table, and a lot of neat stuff buried in chemistry’s history.

10. Seven Elements From One Mine

Photo credit: Svens Welt

Ytterbium, yttrium, terbium, and erbium - they’re a mouthful to say, and there’s a reason that they’re listed together. All four were found in a rather unlikely way, taking their name from the quartz quarry in Ytterby, Sweden, where they were unearthed. The quarry is known as something of a gold mine when it comes to documenting new elements. Gadolinium, holmium, lutetium, scandium, tantalum, and thulium were also found there. If it seems like that might cause confusion, it absolutely did.

In 1843, a Swedish chemist named Carl Gustaf Mosander took gadolinite and separated it into the rare-earth materials yttria, erbia, and terbia. Once he shared his findings, though, something got lost in translation, and erbia became known as terbia, while terbia was called erbia. In 1878, the newly christened erbia was further broken down into two more components - ytterbia and another erbia. It was surmised that ytterbia was a compound that included a new element, which was named ytterbium. This compound was later separated into two elements, neoytterbium and lutecium. Are you confused yet? Would it help to know that neoytterbium got another name change and became just plain old ytterbium again, and that lutecium became lutetium?

The end result was that the Ytterby quarry yielded a whole handful of elements, and it did so for a couple of reasons. The mine was ripe for the picking, thanks to glacier activity during the last ice age. There was also a rather odd coincidence at work: The mine had originally been opened to extract feldspar, which had recently been highlighted as a key component in the creation of porcelain. Making porcelain had been a closely guarded secret of the Far East until some alchemists got involved, and the Ytterby mine opened to help meet the new demand. The chemist Johan Gadolin (who gave his name to some of the rocks) was working at the mine because of his friendship with an English porcelain maker.

9. Barium Was Mistaken For Witchcraft

Photo credit: Matthias Zepper

Today, barium is a pretty common element that’s used to make paper whiter and paints brighter, and, as a contrast agent that blocks X-rays, to make problems with the digestive system more noticeable in scans. In the Middle Ages, it was a well-known substance, but not as we think of it today. Smooth stones, found mostly around Bologna, Italy, were popular with witches and alchemists because of their tendency to glow in the dark after being exposed to light for only a short amount of time.

In the 1600s, it was even suggested that the so-called Bologna stones were actually philosopher’s stones. They had mysterious properties: Heating them up would cause them to glow a strange red colour. Expose them to sunlight for even a few minutes, and they would glow for hours. A shoemaker and part-time alchemist named Vincentius Casciorolus experimented on the stone, trying everything from using it to turn other metals into gold to creating an elixir that would make him immortal. He failed, sadly, and for almost another 200 years, the rock was nothing more than an odd curiosity that was associated with the mysteries of witchcraft.

It wasn’t until 1774, when Carl Scheele (of Scheele’s green fame) was experimenting with earth metals, that barium was recognized as something independent. Scheele originally called it terra ponderosa, or “heavy earth,” and it would be another few decades before an English chemist - Humphry Davy, in 1808 - finally isolated and identified the element that made the witches’ stones glow.

8. Coincidental Helium


Scientific history is filled with instances where people race to be the first to document or explain something, but finding helium ended in a bizarre tie.

In the late 19th century, the scientific community was just beginning to study the emissions from the Sun, and the thinking was that the best (and perhaps only) way to do so was to look at it during an eclipse. In 1868, Pierre Jules Cesar Janssen set up shop in India, where he watched the solar eclipse and saw something new - a yellow light that was previously unknown. He knew that he needed to study it further to determine just what this yellow light was, so he ended up building the spectrohelioscope to look at the Sun’s emissions during the day.

In a bizarre coincidence, an English astronomer was doing the exact same thing at the exact same time, half a world away. Joseph Norman Lockyer was also looking at the Sun’s emissions, also during the day, and he also saw the yellow light.

Both men wrote papers on their findings and sent them off to the French Academy of Sciences. The papers arrived on the exact same day, and although the two were at first rather ridiculed for their work, it was later confirmed, and the two astronomers shared credit for the find.

7. The Great Name Debate


A lot of the names and symbols of the elements don’t seem to match, but that’s usually because the symbol comes from a Latin name, like gold’s “Au” (from aurum). The exception is tungsten, whose symbol is “W.”

The difference arose because the element had two names for a long time. The English-speaking world called it “tungsten,” while others called it “wolfram” for a very cool reason: Tungsten was first isolated from the mineral wolframite, and in some circles, it retained its old name until 2005. Even then, it didn’t give up without a fight, with Spanish chemists in particular arguing that “wolfram” shouldn’t have been dropped from the official information for tungsten.

In fact, in most languages other than English, “wolfram” was still used, and it’s the name that the men who found it, the Delhuyar brothers, requested be used. The word comes from the German for “wolf’s foam,” and its use dates back to the early days of tin smelting. Before we knew anything about elements, people working the smelters recognized a certain mineral by the way it foamed when they melted it. They called the mineral “wolf’s foam” because they believed that its presence consumed the tin they were trying to extract, in the same way a wolf consumes its prey. Today, we know the culprit is the high tungsten content of the ore, but chemists fought long and hard to keep their name. They lost, but the symbol for tungsten remains a “W.”

6. Neon Lights Predate Neon


Broadway and Las Vegas certainly wouldn’t be the same without the bright neon lights that have made them famous, but oddly, the neon light is an old invention - one that predates knowledge of the element itself.

Neon is one of the noble gases and one of only six elements that are inert. Odourless, colourless, and almost completely nonreactive, neon was found along with the fellow noble gases krypton and xenon. In 1898, chemists Morris Travers and William Ramsay were experimenting with the evaporation of liquefied air when they documented the new gases. Neon was first used in 1902 to fill sealed glass tubes and create the garish, unmistakeable advertising signs that we now see everywhere.

They weren’t the first, though; the forerunners of what we now know as neon signs date back to the 1850s, when Johann Heinrich Wilhelm Geissler made the first such lights. The son of a glassmaker, Geissler pioneered the vacuum tube, along with the vacuum pump and the method of fitting electrodes inside the glass tubes. He experimented with a number of different gases and produced many different colours, whereas neon itself glows only reddish-orange. Neon’s popularity came partly from the colour it gives off and partly because it’s incredibly long-lasting, remaining colourful for decades.

5. Aluminium Was More Valuable Than Gold


Chemists knew that aluminium existed for about 40 years before they had the technology to isolate it. When a Danish chemist finally did so in 1825, it became insanely valuable. His original method extracted only the tiniest amount, and it wasn’t until 1845 that the Germans figured out how to create enough of it to study even its most basic properties. In 1852, the average price of aluminium was around US$1,200 per kilogram. Today, that’s the equivalent of about US$33,650.
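
As a quick back-of-the-envelope check on those figures (a minimal Python sketch using only the article’s own numbers; the multiplier is implied by them, not separately sourced):

    # Implied price multiplier from the article's two figures.
    price_1852 = 1200          # US$ per kilogram of aluminium in 1852
    price_2015_equiv = 33650   # the article's stated modern-dollar equivalent

    multiplier = price_2015_equiv / price_1852
    print(round(multiplier, 1))  # prints 28.0 - roughly a 28-fold adjustment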

It wasn’t until the 1880s that another process was developed that would allow for the more widespread use of aluminium, and until then, it remained incredibly valuable. Napoleon III, the first president of the French Second Republic, used aluminium dinner settings for only his most valued guests; the regular, run-of-the-mill guests made do with gold or silver tableware. The King of Denmark wore an aluminium crown, and when aluminium was chosen as the capstone of the Washington Monument, it was the equivalent of choosing pure silver today. Upscale Parisian ladies wore aluminium jewellery and used aluminium opera glasses to demonstrate just how wealthy they were.

Aluminium also formed the backbone of futuristic visions. It was the biggest attraction at the Parisian Exposition of 1878, and it became the material of choice for writers like Jules Verne when they built their grand visions of the future. Aluminium was going to be used for everything from entire city structures to rocket ships.

Of course, the value of aluminium took a steep dive when new ways were developed to create it, and it was suddenly everywhere.

4. Fluorine’s Deadly Challenge


The story of fluorine begins in the 1500s, when the German mineralogist Georgius Agricola described fluorspar as a material that served to lower the melting point of ore. In 1670, a glassworker accidentally found that fluorspar and acids would react, and he used the reaction to etch glass. Isolating fluorine itself proved much more difficult - and deadly.

It was our old friend Carl Scheele who determined that something in the fluorspar was causing the reaction, and in 1771, the hunt for fluorine began in earnest. Before the element was finally isolated by Ferdinand Frederic Henri Moissan in 1886 (work that earned him a Nobel Prize), the process left quite a trail of illness and injury. Moissan himself was forced to stop his work four times as he suffered from, and slowly recovered from, fluorine poisoning. The damage done to his body was so great that it’s generally thought his life would have been dramatically shortened by it, had he not died of appendicitis only a few months after accepting the Nobel Prize.

Humphry Davy’s attempts left him with permanent damage to his eyes and fingers. A pair of Irish chemists, Thomas and George Knox, also worked extensively on trying to isolate fluorine; one died, and the other was left bedridden for years. A Belgian chemist also died in his attempts, and a similar fate befell French chemist Jerome Nickels. In the 1860s, George Gore’s work resulted in a few explosions, and it was only when Moissan hit on the idea of cooling his sample to –23 degrees Celsius (–9 °F) and then working with the highly volatile liquid that fluorine was successfully documented for the first time.

3. The Element Named For The Devil


Nickel is incredibly common today, used in alloys and lending its name to a US coin (which is really only about 25 percent nickel). The name is something of an oddity, though. While many elements are named for gods and goddesses, or for their most desirable characteristic, nickel is named for the Devil.

The word “nickel” is short for the German word kupfernickel. Its use dates back to an era when copper was incredibly useful, but nickel wasn’t the least bit desirable. Miners, always a superstitious lot, would often find ore veins that looked like copper but weren’t. The worthless ore veins came to be called kupfernickel, which translates to “Old Nick’s copper.” Old Nick was a name for the Devil, and he was much more than that to the miners who were labouring deep underground. The belief was that Old Nick put the fake copper veins there on purpose, partially to make the miners waste their time and also to guide them in a direction that could be deadly. Every day was potentially deadly, after all, and miners have long believed in the presence of Earth spirits who can either help or kill the interlopers sent into their underground domain.

Pure nickel was first isolated in 1751 by Swedish chemist and mineralogist Axel Fredrik Cronstedt, and the name that miners had given the worthless ore centuries earlier stuck.

2. The Bizarre Unveiling Of Palladium

Photo via Wikimedia

Palladium was documented by an underappreciated genius named William Hyde Wollaston. Wollaston, who had a medical degree from Cambridge and turned to chemistry only after a long career as a doctor and an inventor of optical instruments, isolated palladium and rhodium and created the first malleable platinum. It’s the way he revealed his discovery of palladium to the world, though, that makes for the best story.

After establishing a partnership with the financially well-off Smithson Tennant, Wollaston got access to a material that needed to be smuggled into England through Jamaica from what’s now Colombia - platinum ore. In 1801, he set up a full laboratory in his back garden and got to work.

His journals from 1802 mention his new element, originally called “ceresium” and renamed “palladium” shortly afterward. Knowing that other researchers were right behind him in their work, he had to go public with his findings. However, he wasn’t quite ready to present them formally, so he took a handful of his new element to a store on London’s Gerrard Street in Soho and handed out a bunch of flyers advertising a wonderful new type of silver that was up for sale. Chemists went rather mad for the whole idea, with a number of them trying to replicate the material and failing to do so. With everyone denouncing it as nothing more than some kind of alloy, he anonymously offered a reward to anyone who could prove that it was one. Of course, no one could.

In the meantime, Wollaston kept working, found rhodium, and published a paper on it. That was in 1804; in 1805, he was ready to come forward with palladium and wrote a paper on his earlier find. Appearing before the Royal Society of London, he gave a talk on the properties of this strange new material, before summing it up with an admission that he had found it earlier and needed time to explore all of its properties to his satisfaction before making it official.

1. Chlorine And Phlogiston


Belief in a substance called phlogiston set back the documentation of chlorine for decades.

Introduced by Georg Ernst Stahl, the theory of phlogiston held that metals were made up of the core being of that metal along with the substance phlogiston. Starting in the 18th century, chemists used it to explain why metals change substance: When iron rusts, for example, it releases its phlogiston, leaving only the dull calx behind. The theory was an ever-evolving one, and by the 1760s, it was believed that phlogiston itself was “inflammable air,” also known as hydrogen. Other gases were described in terms of the theory, too: Oxygen was dephlogisticated air, and nitrogen was phlogiston-saturated air.

In 1774, Carl Scheele first produced chlorine using what we now call hydrochloric acid, and he described it in terms that we recognize pretty easily: It was acidic, suffocating, and “most oppressive to the lungs.” He recorded its tendency to bleach things and the immediate death that it brought to insects. Rather than recognizing it as a completely new element, though, Scheele believed that he had found a dephlogisticated version of muriatic (hydrochloric) acid. A French chemist argued that it was actually an oxide of an unknown element, and that wasn’t the end of the arguing. Humphry Davy (whom we mentioned in his ill-fated quest for fluorine) thought it was an oxygen-free substance, in complete opposition to the rest of the scientific community, which was convinced that it was a compound involving oxygen. It was only in 1811, well after its first isolation and the debunking of the phlogiston theory, that Davy confirmed it was an element and named it after its colour.

Top image: The Periodic Table of the Elements. Credit: Antonio Delgado/Flickr.

[Source: Listverse. Edited. Top image added.]

11 SNAKE MYTHS, DEBUNKED


11 Snake Myths, Debunked
By Mark Mancini,
Mental Floss, 29 July 2015.

These reptiles are the subject of many an urban legend, some of which aren’t too far removed from reality. Others - like the widely believed myths listed below - are off by a mile.

1. They Dislocate Their Lower Jaws While Feeding


Watch this huge African rock python gulp down an entire antelope (unless you’re squeamish and/or a hoofed mammal). How could any animal engulf something that’s bigger than its own head? Popular wisdom holds that serpents can do so by detaching their jaws. The truth is easier to swallow.

Flexibility, not dislocation, is the name of the game. A snake’s lower jaw is split into two halves called “mandibles.” At rest, their tips touch to form the snaky equivalent of a chin. Yet, these bones aren’t fused together like ours are. Instead, a stretchy ligament connects the mandibles and enables them to separate once dinner starts. Similar equipment enhances the upper jaw’s manoeuvrability as well.

2. You Can Tell a Rattlesnake’s Age by Counting its Rattles

Credit: Gary Stolz & Pharaoh Hound/Wikimedia Commons

This premise makes two false assumptions: (A) the critters get exactly one new rattle each year and (B) existing rattles are never lost. Let’s start with the first claim. After each shedding of the skin, rattlesnakes obtain another tail bulb. But, for babies and juveniles, that event can take place as often as every few weeks. In contrast, elderly specimens might only shed on a bi-annual basis. Moreover, rattles don’t last forever - over time, they become prone to breaking off.

3. Certain Snakes Are “Poisonous”


Though many of us use them interchangeably, “poisonous” and “venomous” aren’t synonyms. Poisons work by getting eaten, inhaled, or absorbed through the skin. Venom, on the other hand, is any toxic substance that gets injected into its target via fang, stinger, etc. Hence, “poisonous snakes” technically don’t exist. You’ll want to watch your step, though, because more than 600 venomous species still do.

4. Snakes Are Slimy

Credit: Cloudtail/Flickr

Amphibians secrete mucus all over their skin. Ergo, most frogs and toads have wet, slippery hides. Snakes, being reptiles, do nothing of the sort. Instead, they’re covered with dry scales and, when held, can feel like smooth sand running through your fingers.

5. Cottonmouths Can’t Bite Underwater

Credit: Ltshears/Wikimedia Commons

When your scientific name (Agkistrodon piscivorus, pictured above) literally means “hooked-toothed fish-eater,” people naturally assume that you spend a lot of time in and around water. This assumption isn't wrong: throughout the American southeast, these semiaquatic predators are a common sight. However, familiarity doesn’t always breed understanding. Despite their knack for hunting prey while submerged, one dangerous myth claims that cottonmouths can’t strike underwater. They can and do. So, whether you’re out hiking or going for a dip, please exercise caution around them.

6. They’re Mostly Tail

Credit: Uwe Gille/Wikimedia Commons//CC BY-SA 3.0

Here’s an inside look at a generalized snake. As you can see, serpentine survival depends on numerous vital organs (housed between two rows of ribs). Notice that empty, white area near the end? That’s the tail, which usually doesn’t even take up a fifth of the snake’s total body length. Regardless, it can still take on important functions. Consider the aptly named spider-tailed viper, whose tail tip lures in arachnid-eating birds because it comes with long, skinny scales that resemble spider legs.

7. Snakes Are Deaf

Credit: Parag Sankhe/Flickr

Since they lack eardrums, naturalists once thought that our serpentine friends couldn't hear airborne noises. Fairly new research disproves this. Snakes still possess inner ears, which connect to their jawbones. While resting or slithering, they can sense vibrations in the ground (such as footsteps). Once vibrations are picked up by the jaw, the sound waves are sent to the brain and processed.

So what about vibrations that pass through the air? In 2011, biologist Christian Christensen monitored the brains of a few ball pythons (Python regius). As he discovered, his test subjects had no trouble hearing low-frequency airborne sounds because their skulls vibrated in accordance with them. However, Christensen’s pythons weren’t as sensitive to higher-pitched noises.

While further research may disprove this theory, it is generally believed that cobras sway to the music of snake charmers not because of the sounds emanating from their instruments, but because the animals interpret the flute in motion as a potential threat.

8. Milk Snakes Drink…Well, Milk

Credit: Danny Steaven/Wikimedia Commons

One can find folks who genuinely believe that these harmless little guys will grab onto cow udders and start chugging milk (hence their common name). Obviously, this doesn’t happen. For starters, reptiles can’t digest dairy products. Also, a typical bovine wouldn’t blithely stand still as needle-like teeth dug into a rather sensitive area.

9. Rattlesnakes Always Rattle Before Lashing Out

Credit: Gregory "Slobirdr" Smith/Flickr

Snakes may not be the spiteful villains you see in cartoons, but when danger strikes, they sometimes can’t help but strike back. Rattlers warn potential attackers by vibrating their trademark tails. But here’s the thing: they don’t have to sound the alarm. On occasion, they’ll just skip the rattling entirely. Always tread carefully through rattler country.

10. Baby Snakes Inject More Venom Than Adults Do

Credit: brian lee clements/Flickr

Technically, the jury’s still out on this one, but scientists lack any compelling evidence to support it. Old-school rumours assert that, among venomous species, babies deliver more potent bites because they haven’t yet learned self-control and will inject far more venom than necessary. Seasoned adults, meanwhile, are said to use more conservative doses.

No study has yet verified that snakes consciously dictate how much venom they dish out. Furthermore, even a small nip from a full-sized specimen probably expels more of the stuff than the biggest bites from hatchlings of the same species ever could.

11. Constrictors Asphyxiate Their Prey

Credit: karoH/Wikimedia Commons

Last week, a new paper - published in The Journal of Experimental Biology - put the strangulation theory to rest for good. Boas and pythons have long been accused of fatally choking their victims. But it turns out that they actually kill by halting blood flow. Dr. Scott Boback and his colleagues deduced as much by measuring constriction’s effects on the heart rate, blood ion balance, blood gases, and blood pressure of anesthetized rats. Within seconds, the team learned, an ordinary boa can wrap tightly enough around its next meal to stop circulation altogether.

Top image: Black mamba. Credit: Daniel Coomber/Flickr.

[Source: Mental Floss. Edited. Some images added.]

INFOGRAPHIC: HOW THE INTERNET CHANGES YOUR BRAIN


How the Internet Changes Your Brain [Infographic]
By Chris Zook,
Webpage FX, 16 July 2015.

The Internet is a huge part of everyday life. What started as a project to share data among West Coast universities has grown exponentially and taken on a life of its own.

Still, the Internet is relatively new to the world. Computer scientists laid the foundation for the Internet in 1969, and in the past 45 years it’s become the fastest and most efficient avenue of information exchange in history. With text, photo, video, games, social networks, and more, it’s no wonder Millennials reportedly average three-and-a-half hours on the Internet every day - and they’re not even the generation that’s fully grown up with Internet access.

But does all of this time spent online have consequences?

That’s what I wanted to check out. I love the Internet - it’s a big reason why I work at Webpage FX - and when I first started researching this topic, I didn’t think I’d find anything conclusive on how the Internet affects people’s brains. I was really just curious.

But with help from researchers and psychology publications, I found some compelling evidence that stopped me in my tracks. And while I stubbornly didn’t want to believe it all at first, I can’t refute the research - the Internet really does change your brain.


[Source: Webpage FX.]

Thursday 30 July 2015

10 ‘WHAT IF’ SCENARIOS ABOUT THE EARTH’S GEOGRAPHY AND CLIMATE


10 ‘What If’ Scenarios About The Earth’s Geography And Climate
By David Tormsen,
Listverse, 30 July 2015.

Alternate history usually examines the consequences and implications of different decisions made by humans at certain times in history. But unless we live in a completely deterministic universe, we can go back even further into deep time to explore the possibilities of very different Earths.

10. What If Pangaea Had Never Broken Up?


From 300 million to 200 million years ago, the world’s continents were fused into a single landmass called “Pangaea,” which then slowly drifted apart to create the continents we know today - causing some interesting situations along the way, like India crashing headlong into Asia’s underside and raising the Himalayas. But what if tectonic drift had never happened, and Pangaea still dominated one hemisphere, with a great Tethys world ocean on the other?

Most likely, we would have a less diverse world biologically because the development of different species occurs mainly through geographic isolation, which causes selective pressures and the development of new genetic traits. Much of the interior would be arid as moisture-bearing clouds wouldn’t reach very far inland. With the excess mass affecting the Earth’s spin, most of the landmass on Earth would be in the hot equatorial regions.

When compared to our world, such an Earth would be roughly 20 degrees Celsius (36 °F) hotter during summer. It would also experience massive typhoons, due to the enormous circulation system in the Tethys that would be unimpeded except by island chains or shallow continental shelves.
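
Since temperature differences and absolute temperatures convert differently between scales, here is a minimal Python sketch of the arithmetic (illustrative only; the 20-degree figure is the article’s):

    # A temperature DIFFERENCE converts with the 9/5 scale factor alone;
    # the +32 offset applies only to absolute readings.
    def delta_c_to_delta_f(delta_c):
        return delta_c * 9 / 5

    print(delta_c_to_delta_f(20))  # 36.0 - a 20 deg C difference is 36 deg F
    # (An absolute reading of 20 deg C, by contrast, converts to
    # 20 * 9/5 + 32 = 68 deg F.)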

During the second historical Pangaean period, mammals dominated in tropical and water-rich monsoon areas while reptiles dominated the great arid areas, largely because mammals use more water when they excrete. Studies of a transect of the Pangaean fossil record show tropical regions dominated by traversodont cynodonts, an extinct order of pre-mammals, while the temperate regions were largely occupied by procolophonoids, which resembled stocky lizards and are distantly related to modern turtles.

Different regions of a modern Pangaea might have been dominated by completely different orders of life, a variety of tropical mammal and mammal-like creatures populating the hot and wet regions, and reptiles and pseudoreptiles ruling the roost in the dry interior and temperate regions. Intelligent life would have been unlikely to develop due to the relative stasis of the environment, but if it had, its effect on the opposite climatic region would have been dire.

9. What If The Earth Had No Tilt?


During the year, seasons result from our tilted Earth revolving around the Sun, exposing the two hemispheres to varying levels of sunlight. Without the Earth’s 23.5-degree tilt, every region on Earth would have approximately 12 hours of daylight every day, while at the poles the Sun would sit on the horizon forever.

The weather would be much more uniform, although there would be some changes due to variation in the distance from the Earth to the Sun over the year. The northern latitudes would experience a constant winter environment, while the equatorial regions would be humid tropics with heavy rainfall. Walking north or south from the equator, you would encounter regions with perpetual summer, then temperate spring or autumn, and finally a wintry wonderland becoming more uninhabitable as you approached the poles.

Many people believe that the Earth’s tilt was caused by a collision with a large object, an event that also caused the formation of the Moon. According to the Rare Earth Hypothesis, this was a good thing for the development of life. A planet with no tilt might not support an atmosphere, with gases evaporating into space from the full blast of sunlight at the equator and freezing and falling to Earth at the poles.

If life survived, the situation could still be disastrous for any intelligent species like us. With seasons non-existent and constant rainfall in the tropics, growing crops the traditional way would be impossible. Disease would also be more prevalent around the equator. If an intelligent species like us did develop, they would have little impetus to start an industrial revolution, which was largely driven by technologies first made for heating homes in cold winter months.

8. What If The Earth Had A Different Tilt Or Rotation?


Altering the Earth’s tilt would drastically alter the climate and environment, with the difference in the angle changing the amount of sunlight reaching the Earth and the strength of the seasons. If the Earth were tilted a full 90 degrees, seasonal changes would be the most extreme. As the Earth revolved around the Sun, the poles would alternately point directly at the Sun and sit perpendicular to it. One hemisphere would be bathed in sunlight and hot temperatures, and the other in frigid darkness. Three months later, both poles would have a low Sun angle, and the equatorial regions would have 12 hours of Sun and 12 hours of night per day, with the Sun rising in the north and setting in the south.

Life developing on such a world may be unlikely due to yearly cycles of radiation sterilization in summer and deep freezing in winter, though some organisms on Earth known as extremophiles might be capable of surviving such conditions. If such extremophiles developed into complex life, they would probably have strong hibernation or migration adaptations.

Artist and avid dreamer Chris Wayan has explored a number of scenarios in which the Earth’s rotational poles sit in different places. He maintained the 23.5-degree tilt but moved the locations of the poles on the Earth’s surface. In one scenario dubbed “Seapole,” he tilted an Earth globe to place both poles above water, then extrapolated the effect on the climate. By removing the ice domes of Antarctica and Greenland, he created a much warmer and wetter world with a potentially higher biomass and diversity.

A reverse scenario called “Shiveria” placed ice caps over land on both ends (China and northern South America), creating a generally colder and drier world. However, Antarctica would be tropical and the Mediterranean a hothouse he calls “the Abyss.”

Flipping the Earth upside down completely reverses water currents, winds, and rainfall patterns, creating a world where China and North America are deserts but the overall situation is probably more fertile for life. XKCD also explored the idea, rotating the Earth to put the poles on the equator, a scenario very similar to Shiveria. The site explores the implications for our world cities, turning Manila into the equivalent of Reykjavik, Moscow into an arid desert, and London into a sweltering metropolis.

7. What If South America Was An Island Continent?


From the late Jurassic until about 3.5 million years ago, North and South America were separated by water. Independent evolution continued on both continents for almost 160 million years with some limited biotic exchange via nascent Caribbean islands from 80 million years ago and the Central American peninsula from 20 million years ago.

At that time, South America, like Australia, was dominated by marsupials, while also having a number of bizarre, placental, hooved animals (some resembling camels) and the edentate (“lacking teeth”) ancestors of anteaters, armadillos, and sloths. North America, Eurasia, and Africa were dominated by placental mammals without any surviving marsupial species.

All living marsupials actually originated from South America, with kangaroos and opossums sharing genetic ancestors. The South American marsupials may have included many carnivorous species - pouched predators called “borhyaenoids” that resembled weasels, dogs, bears, and saber-toothed tigers - although we can’t be sure they actually carried their young in pouches.

When the two American continents connected, North American mammals spread across South America, outcompeting most marsupial species. Meanwhile, South American reptiles, birds, and a small number of mammals moved north.

If the two continents had remained separate, it’s likely that many of the marsupials would have survived into the present, creating an environment as wild and alien as Australia. Regrettably, if humans or a close analogue had arrived, they would have probably brought placental mammals from Eurasia with them, potentially causing an extinction crisis similar to that faced by Australian marsupials in our world.

6. What If The Mediterranean Had Stayed Closed?


Roughly six million years ago, the Strait of Gibraltar closed, leaving the Mediterranean connected to the Atlantic by only two small channels. The results were dire. As tectonic pressures pushed Africa toward Europe, the channel that allowed water to flow out was sealed, but salty water continued to rush in through the other. Unable to exit, the water in the Mediterranean began to evaporate, leaving a vast salty brine like a massive Dead Sea, with a layer of salt up to 1.6 kilometres (1 mi) thick forming on the seafloor and most of the sea life in the Mediterranean going extinct. This was the “Messinian salinity crisis.”

After hundreds of thousands of years, the Mediterranean was reconnected to the Atlantic in the “Zanclean flood.” In that event, the sea rapidly refilled, land bridges flooded between Europe and North Africa, animal species became isolated on islands where they underwent speciation, and Atlantic marine species were forced to quickly adapt to recolonize the Mediterranean.

What if this had never happened, and the Mediterranean had remained a desiccated salt pan? It’s likely that human beings would have reached Europe much earlier than in our world, simply migrating through the salty lowlands rather than taking a long detour through the Middle East.

Salt is a valuable resource. As civilization developed, it’s likely that cultures in the region would have exploited this resource, trading it to far-off regions of Africa and Asia. With salt necessary for human survival when eating a diet rich in cereals, its increased availability might have caused agriculture to develop faster and more successfully in the West.

That said, salt might have been considered less valuable for being more plentiful, possibly carrying less religious or symbolic weight as a cheap commodity. Saying someone was “worth their salt” might have become an insult rather than praise.

5. What If There Were No Large Metal Deposits On Earth?


Humans and animals require metals to survive. But what if metals like copper had never been concentrated into exploitable deposits, or they had all been located in regions inaccessible to early man, such as under the ice caps or the ocean? While the development of more efficient, advanced Stone Age technologies would have continued, it’s likely that entire avenues of development would have been blocked to humanity (or any intelligent life arising on such a world).

Even without metals, there would have been a transition out of the classic Neolithic era as the agricultural revolution saw the rise of settlements and more concentrated populations. The plough and the wheel would still have revolutionized life for this Stone Age society, but a lack of useful metals might have stunted the development of mining, trade, and social classes. The existence of sophisticated civilizations without metals in the Americas suggests something similar would have developed in Eurasia. However, if the lack of metal deposits also included gold and silver, the economies and art of such cultures might have appeared rather drab.

In Mesoamerica, the relative lack of metals led to the sophisticated use of the volcanic glass obsidian, which can be as sharp as a modern scalpel but is also rather brittle. The ancient Aztecs used obsidian to make swords edged with multiple glass blades as well as arrowheads, spears, and knives. It had deep religious significance, and its natural sharpness is one of the reasons Aztec culture was so enamoured of self-sacrifice. With the sharp blades, cutting one’s tongue or ear to release blood in religious rituals wouldn’t have hurt as much as we might imagine.

Obsidian imported from Ethiopia and the Near East was also used in Egypt. However, its use to make knives and sickle blades in the pre-dynastic period was slowly phased out as metallurgy developed, although it still had a place as an artistic material. Without metals, Egyptian civilization may have had a greater need for control of obsidian, expanding into the Near East and East Africa to secure key sources. In Europe, one of the richest sources of obsidian was the region around the Carpathian Mountains, from which another culture of glass-edged sword wielders might have emerged.

It’s unclear just how sophisticated a culture using only glass, stone, and ceramics might have become. Many advances in transportation, cooking, and engineering might have been impossible. Certainly, there couldn’t have been an industrial revolution as we know it. Although societies might have developed advanced knowledge of medicine and astronomy, they would have been unlikely to ever reach the Moon.

4. What If The Sahara Was Still Wet?


Until around 5,000 years ago, the Sahara was a lush land of lakes and grasslands, inhabited by hippos and giraffes. This was the African humid period, and it is still not clear to scientists exactly how it began and ended. This climate allowed early humans to migrate out of Africa. Otherwise, the Sahara would have proven to be a serious impediment. The transition to the present desert conditions probably happened about 3,000 years ago, forcing the inhabitants to migrate to more habitable regions.

But what if the humid period had never ended? During this period, there were several large lakes in southern Libya. Lake Chad was also much larger. Around these lakes, civilizations that used tools and created art left many bones and artefacts that are now buried in the forbidding sands. In 2000, a team of palaeontologists searching for dinosaur bones in southern Niger stumbled upon the remains of dozens of human individuals. They also found clay potsherds, beads, and stone tools as well as the bones of hundreds of crocodiles, fish, clams, turtles, and hippos.

In 2003, a follow-up expedition discovered at least 173 burial sites. Based on the design of the pottery shards, the people were identified as members of the extinct Kiffian and Tenerian cultures. Meanwhile, fossil records have shown that desert areas of the Sudan were once home to vast herds of cattle.

Historically, the desert acted as a barrier separating sub-Saharan African cultures from those in North Africa and the Mediterranean. With technological developments of the Fertile Crescent unable to spread easily across the Sahara, many Eurasian innovations either never arose in sub-Saharan Africa or had to be independently developed.

On the other hand, a lush Sahara would have sparked the development of settled towns, cities, and centralized governments in the region from an early period. In addition to increasing the area occupied by civilized peoples and the reach of the great ancient trade networks, there would have been more genetic, linguistic, and cultural mixing between Africa and Eurasia as well.

Tropical diseases might have been a problem in some areas. It’s also likely that the cultures of a wet Sahara would have had varying levels of development, just like other regions. But overall, the region would have supported a higher level of human civilization, probably leading to increased development. The Sahara might have been home to a large unified culture like China, with major effects on the development of Mediterranean and European civilizations.

3. What If There Was No Gulf Stream?


The Gulf Stream is the most important ocean current system in the northern hemisphere, stretching from Florida to northwestern Europe. It brings warm Caribbean waters across the Atlantic, warming Europe; without it, northern Europe would be as cold as Canada at the same latitude. The system is driven by differences in the temperature and salinity of seawater: Denser, colder, saltier water in the North Atlantic sinks and flows south until it warms and becomes less dense, then rises and flows north again.

This system has shut down several times due to influxes of freshwater and variations in the amount of solar energy hitting the Earth. The Gulf Stream returned 11,700 years ago at the end of the last ice age, which might not have happened without higher levels of energy from the Sun. In that case, northwestern Europe would have stayed in ice age conditions for a longer period, with a larger Arctic ice cap and more extensive Alpine glaciers.

The region would have been unsuitable for agriculture and the development of civilization. The inhabitants of northwestern Europe might have been more like the Saami or the Inuit than the historical cultures of our world. Western civilizations would have been limited to the Mediterranean, North Africa, and the Middle East. On the plus side, it would have probably been too cold for marauding Central Asian tribes like the Huns or Mongols to gallop in and kill everyone.

Another interesting scenario would occur if the Gulf Stream returned after the development of settled civilization. As the ice retreated, a new frontier would open to settlement and conquest for the cramped cities along the southern Mediterranean coast.

2. What If Doggerland Still Existed?

Photo credit: Max Naylor

Until 8,200 years ago, there was a low-lying landmass in the North Sea that has since been dubbed “Doggerland,” or “Britain’s Atlantis.” It was a remnant of a greater Doggerland covering the entire North Sea area, a vast land of hills, marshland, heavily wooded valleys, and swamps inhabited by Mesolithic people who migrated with the seasons, hunting and gathering berries for survival. Their artefacts along with animal bones are occasionally discovered by North Sea fishermen. Climate change caused the region to be slowly flooded, forcing the inhabitants to move.

The last portion of greater Doggerland was centred around what is now Dogger Bank, lying just below the North Sea waters. Recent analysis has suggested that this last remnant and its inhabitants were wiped out 8,200 years ago by a 5-metre (16 ft) tsunami caused by the collapse of 3,000 cubic kilometres (720 mi³) of sediment, an event known as the “Storegga slide.”

But what if the Storegga slide had never occurred, or if Dogger Bank had been slightly higher?

If humans had survived there, they would have had a major impact on the development of civilization, even if their own development had been delayed by isolation. The Mesolithic inhabitants probably would have been replaced by Neolithic invaders from the mainland, who in turn might have been overwhelmed by Celtic invaders, as in the British Isles.

Later, the Celts might have been displaced by expanding Germanic invaders, especially as the Celts would probably have had a lower population density in Doggerland than in the British Isles and mainland Europe. North Germanic Doggerlanders might have formed a cultural continuity between the Norse cultures and those of Britain. It’s also possible that Doggerland could have been colonized by Balts, by a people that has since ceased to exist, or by a group that never existed in our world.

Regardless, a surviving Doggerland would still be extremely susceptible to climate change. Global warming would present many of the same existential problems facing low-lying islands in the Pacific. However, a wealthy, developed Northern European country facing imminent extinction might have more influence in addressing environmental policies in Europe.

1. What If There Had Been Slightly Less Ice During The Ice Ages?

Photo via Wikimedia

In 2006, Steven Dutch from the University of Wisconsin presented a paper to the Geological Society of America about the implications of slightly less icy ice ages. He considered what would have happened if the North American ice sheets had never extended far below the Canadian border and the Scottish and Scandinavian ice sheets had never merged. This would have had three major effects: The Missouri River would have retained its original course into Hudson Bay rather than shifting to its present course, the Great Lakes and the Ohio River would never have formed, and the English Channel would not exist, either.

In our world, when the Scandinavian and Scottish ice caps formed, they created a large proglacial lake which overflowed into the ancestral Rhine-Thames river system, creating the English Channel. If the two caps had never joined, the water would have flowed north instead, leaving a land bridge connecting England to mainland Europe. The historical British defensive advantage vis-a-vis mainland Europe would have been non-existent, which would have had major effects on human migration, settlement, and cultural diffusion patterns in the West.

Meanwhile, in North America, the lack of ice caps would have changed the drainage systems, with the pre-Pleistocene Teays River still in existence and the Niagara River retaining its ancient course - so Niagara Falls would not exist. The easiest passage past the Appalachians would have been the St. Lawrence River, greatly changing colonization patterns. And the changes to the Missouri River would have removed the convenient east-to-west waterways used by the Lewis and Clark expedition in our world.

If European colonization had still happened, expansion across the North American continent would have been a significantly slower process due to the reduced number of navigable waterways. It would probably have proceeded via the north, possibly by a people resembling a mix of English and French cultures, or even by people more culturally alien than we can possibly imagine.


[Source: Listverse. Edited. Top image added.]