
Science. Technology. Civilization. Culture.



By now we’re all familiar with the “Internet of Things” and have accepted that anything that can be digitally linked through the endless expanse of the internet no doubt will be. But what happens when one of those things is your brain? In another example of “this used to be science fiction,” that’s where we’re going next: neural digitization – our brains becoming nodes on the net.

It’s called the Brainternet and it works by converting brain waves into signals that can be livestreamed and made accessible through a web portal. The tech relies on some basic elements. Someone wears a mobile electroencephalogram (EEG) headset that captures brain wave signals. Those signals are then transmitted to a small computer that, with the help of specialized code, deciphers them to be served up as information on a website.

At the moment this is a one-way road. People on the web-portal side can see what’s happening in someone’s brain, within the limits of what EEG can offer, but can’t input info from the other direction. The creators say that’s eventually where this sort of tech is going.


“Ultimately, we’re aiming to enable interactivity between the user and their brain so that the user can provide a stimulus and see the response,” says Adam Pantanowitz, who supervised the team that created the tech at the Wits School of Electrical and Information Engineering. “In the future, there could be information transferred in both directions–inputs and outputs to the brain.”

What will enable that interactivity is, of course, our smartphones. Imagine having an app on your phone that dials up other people's brains, and maybe your brain will be in their contact lists.

More immediately, though, the applications for this tech are less controversial. Its creators say it’s mainly about gaining a better understanding of how the brain works, and it offers a few practical, health-based applications.

"In the short to medium term, this mobile, portable and simple tech can enable some really forward-thinking medical applications, such as streaming brain data if a person suffers from epilepsy, or blood glucose data of a person who has diabetes," Pantanowitz told me by email. "This can allow people to interact with their own data in a unique way (through an interface or a smartphone), and allow them to store it more seamlessly (so that diagnostics can be performed), and shared with, say, a medical practitioner."

There’s nothing new about EEG, and nothing especially alarming about what the tech can enable. Plenty of devices that turn brain waves into actionable signals already exist. Some of those are basically games (like “think moving” a ball through a maze), and others are remarkable applications that circumnavigate paralysis and enable communication without speaking.

The newer element here is connectivity. It’s one thing to use brain waves to accomplish tasks, trivial or remarkable, and another to harness and translate brain activity across a network. Add interactivity, with people able to send signals back and forth, and we’ve turned yet another page in the science fiction novel we’re all living. While that may be inevitable, it’s still worth considering the implications of our brains moving ever more toward transparency.

Pantanowitz says we should be thinking through those concerns now. "I believe that any [brain] signal that is produced by a person would need to be uploaded on an opt-in basis. Not every person would be keen to 'open-source’ their thought signals (and some of their most private data). So this needs to be treated with some serious consideration, as does the security around systems like these which may appear in the future."

This project, he added, is a proof-of-concept that can catalyze these larger conversations. "I think we are going to be in a place sooner than we imagine with tech that we need to grapple with, and some major considerations are going to arise. This type of project can catalyze these conversations and allow us to face some important questions we need to grapple with sooner."

The Brainternet is just one step forward, but it’s a meaningful step, and it should spark a few questions about where we’re going next.

SOURCE
 


BRIEF

  • It took 80 years to discover one of the largest rat species, the giant vika
  • The creature is a foot-and-a-half long and can weigh up to two pounds
  • Only one has ever been found, emerging from a tree cut down by loggers
  • Nuts bearing the characteristic teeth-marks of the giant rat have also been found


Remember the movie The Princess Bride, when the characters debate the existence of R.O.U.S.es (Rodents of Unusual Size), only to be beset by enormous rats? That's kind of what happened here.

Mammalogist Tyrone Lavery heard rumors of a giant, possum-like rat that lived in trees and cracked open coconuts with its teeth on his first trip to the Solomon Islands in 2010. After years of searching and a race against deforestation destroying the rat's would-be home, Lavery, along with John Vendi and Hikuna Judge, finally found it.

"The new species, Uromys vika, is pretty spectacular -- it's a big, giant rat," said Lavery, a post-doctoral researcher at The Field Museum in Chicago and the lead author of the Journal of Mammalogy paper announcing the rat's discovery. "It's the first rat discovered in 80 years from Solomons, and it's not like people haven't been trying -- it was just so hard to find."

The Solomon Islands, a country made up of a series of islands a thousand miles northwest of Australia, are biologically isolated. Over half of the mammals on the Solomon Islands are found nowhere else on Earth, making it an attractive location for scientists like Lavery.

"When I first met with the people from Vangunu Island in the Solomons, they told me about a rat native to the island that they called vika, which lived in the trees," says Lavery. "I was excited because I had just started my Ph.D., and I'd read a lot of books about people who go on adventures and discover new species."

But years of searching didn't turn up any of the giant rats. "I started to question if it really was a separate species, or if people were just calling regular black rats 'vika,'" said Lavery. Part of what made the search so difficult was the rat's tree-dwelling lifestyle. "If you're looking for something that lives on the ground, you're only looking in two dimensions, left to right and forward and backward. If you're looking for something that can live in 30-foot-tall trees, then there's a whole new dimension that you need to search," explains Lavery.

Finally, one of the rats was discovered scurrying out of a felled tree. "As soon as I examined the specimen, I knew it was something different," says Lavery. "There are only eight known species of native rat from the Solomon Islands, and looking at the features on its skull, I could rule out a bunch of species right away." After comparing the specimen to similar species in museum collections and checking the new rat's DNA against the DNA of its relatives, Lavery confirmed that the giant rat was a new species, which he named Uromys vika in honor of the local name for the rat. "This project really shows the importance of collaborations with local people," says Lavery, who learned about the rat through talking with Vangunu locals and confirmed with them that the new rat matched the "vika" they knew.

Vika are a lot bigger than the black rats that spread throughout the world with European colonists -- the rats you'll see in American alleys weigh around 200 grams (0.44 pounds), while Solomon Islands rats can be more than four times that size, weighing up to a kilogram (2.2 pounds). From the tip of its nose to the tip of its tail, U. vika is about a foot and a half long. And while they haven't yet been observed cracking open coconuts, they do have a penchant for chewing circular holes into nuts to get at the meat.

The rat's giant size and possum-like tree-dwelling lifestyle can be traced back to its island home. Islands are full of animals found nowhere else on earth that evolved in isolation from the rest of the world. "Vika's ancestors probably rafted to the island on vegetation, and once they got there, they evolved into this wonderfully new species, nothing like what they came from on the mainland," explains Lavery.

While the rat has only just been discovered, it will quickly be designated as Critically Endangered, due to its rarity and the threat posed by logging to its rainforest habitat. "It's getting to the stage for this rat that, if we hadn't discovered it now, it might never have gotten discovered. The area where it was found is one of the only places left with forest that hasn't been logged," says Lavery. "It's really urgent for us to be able to document this rat and find additional support for the Zaira Conservation Area on Vangunu where the rat lives."

Lavery also emphasized the necessity of preserving the rats, not just for ecological reasons, but for the role they play in the lives of Vangunu's people. "These animals are important parts of culture across Solomon Islands -- people have songs about them, and even children's rhymes like our 'This little piggy went to market.'"

The discovery marks an important moment in the biological study of the Solomon Islands, especially since vika is so uncommon and close to extinction. "Finding a new mammal is really rare -- there are probably just a few dozen new mammals discovered every year," says Lavery. "Vika was so hard to find, and the fact that I was able to persevere is something that I'm proud of."

###

This study was completed by scientists at The Field Museum and the Zaira Resource Management Area.


SOURCE
 




Time to add even more candles to life's birthday cake – around another 150 million of them, to be precise.

Rocks from northern Canada have shown signs that life was doing its thing about 3.95 billion years ago, setting a new record for fossils while showing biology was more eager to get started on Earth than we had previously suspected.

Researchers from the University of Tokyo made the discovery by analyzing the compositions of carbon isotopes in sedimentary rocks from the region of Labrador in Canada's north-east.

The kinds of rocks they analysed date to a period of Earth's history called the Eoarchean, a time between 4 and 3.6 billion years ago when the crust was still new and the atmosphere was heavy and relatively free of oxygen.

As you might expect, there aren't many places on the planet's surface where rocks left over from the Eoarchean can still be found. Most have been melted beyond recognition, churned back into the mantle, or weathered into dust.

Of those that do remain, few are good candidates for finding signs of ancient chemistry.

One exception to this rule is a strip of rock in Greenland called the Isua Greenstone Belt. Samples of this rock belt point at the chemical signature of life at least 3.7 billion years ago.

The problem is these same clues hadn't been uncovered in rocks taken from similar Eoarchean sites.

Unlike dinosaur teeth or impressions of ancient leaves, the earliest life didn't leave behind much detail to mark its presence. We're not talking about impressions of slime or the outlines of primitive bacteria.

Instead, these suspected fingerprints of early biochemistry are in the form of graphite and carbonate.

By heating the material and analysing the carbon isotopes they contain, researchers can determine whether they are biogenic – representing the fossilised remains of early cells – or are the result of some geochemical process.
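For the curious, the arithmetic behind that test is simple. Carbon fixed by living things is depleted in carbon-13, so its delta-13C value, measured against the VPDB reference standard, comes out strongly negative. The sketch below is only an illustration with made-up sample ratios, not the study's data.

```python
# delta-13C in per mil: (R_sample / R_standard - 1) * 1000, where R = 13C/12C.
R_VPDB = 0.011237  # approximate 13C/12C ratio of the VPDB reference standard

def delta13c(ratio_13c_12c: float) -> float:
    """Return delta-13C in per mil relative to VPDB."""
    return (ratio_13c_12c / R_VPDB - 1.0) * 1000.0

# Made-up ratios: biologically fixed carbon is typically tens of per mil
# "lighter" than inorganic carbonate, which sits near zero.
for label, ratio in [("graphite (biogenic-like)", 0.010900),
                     ("carbonate (inorganic-like)", 0.011220)]:
    print(f"{label}: delta-13C = {delta13c(ratio):+.1f} per mil")
```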

Finding biogenic graphite in some ancient rocks but not others has given researchers cause for pause.

Fortunately, we can now get back on the case, as researchers have found the graphite in 54 metasedimentary Canadian samples from the same period is in fact the product of living systems.

Not only that, the rocks they're found in are older than the Isua specimens by 150 million years, suggesting life was busy rearranging carbon atoms a mere half a billion years after Earth settled into shape.

They paid close attention to the consistency between the crystallisation temperatures of the graphite and the temperatures that heated the sedimentary rock, ruling out contamination at some later date.

Given the 'remains' are little more than chemical shadows of graphite and carbonate, they don't tell us a whole lot about the nature of the organisms that left them behind. At least not on their own.

But they do fit into a bigger picture of how life might have evolved here, while suggesting the hostile conditions of our new-born planet did little to impede the march of life.

That bodes well for our search for living systems on other bodies in our Solar System and beyond.

How ancient life got started, both on Earth and elsewhere, is still a perplexing mystery.

One theory is that it began largely as competing strands of RNA that folded and recombined until some pockets of nucleic acid soup could co-opt other useful chemical processes.

Others think metabolism-like processes were well underway early in Earth's history, and replicating nucleic acids joined later.

Earlier this year, Australian researchers found hints of life in ancient hot spring deposits dating back to 3.48 billion years ago, posing questions about whether life might have had its start in less oceanic environments.

It's even possible that today's biosphere stuttered into existence after a series of extinctions and rapid restarts.

For all of the questions that remain, we can be fairly confident that the chemistry of life has been affecting our planet's development practically from the very beginning.

Life among the stars seems more inevitable than ever.

This research was published in Nature.

SOURCE
 




Life on Earth began somewhere between 3.7 and 4.5 billion years ago, after meteorites splashed down and leached essential elements into warm little ponds, say scientists at McMaster University and the Max Planck Institute in Germany. Their calculations suggest that wet and dry cycles bonded basic molecular building blocks in the ponds' nutrient-rich broth into self-replicating RNA molecules that constituted the first genetic code for life on the planet.

The researchers base their conclusion on exhaustive research and calculations drawing in aspects of astrophysics, geology, chemistry, biology and other disciplines. Though the "warm little ponds" concept has been around since Darwin, the researchers have now proven its plausibility through numerous evidence-based calculations.

Lead authors Ben K.D. Pearce and Ralph Pudritz, both of McMaster's Origins Institute and its Department of Physics and Astronomy, say available evidence suggests that life began when the Earth was still taking shape, with continents emerging from the oceans, meteorites pelting the planet - including those bearing the building blocks of life - and no protective ozone to filter the Sun's ultraviolet rays.

"No one's actually run the calculation before," says Pearce. "This is a pretty big beginning. It's pretty exciting."

"Because there are so many inputs from so many different fields, it's kind of amazing that it all hangs together," Pudritz says. "Each step led very naturally to the next. To have them all lead to a clear picture in the end is saying there's something right about this."

Their work, with collaborators Dmitry Semenov and Thomas Henning of the Max Planck Institute for Astronomy, has been published today in the Proceedings of the National Academy of Sciences.

"In order to understand the origin of life, we need to understand Earth as it was billions of years ago. As our study shows, astronomy provide a vital part of the answer. The details of how our solar system formed have direct consequences for the origin of life on Earth," says Thomas Henning, from the Max Planck Institute for Astronomy and another co-author.

The spark of life, the authors say, was the creation of RNA polymers: their essential building blocks, nucleotides, were delivered by meteorites, reached sufficient concentrations in pond water, and bonded together as water levels fell and rose through cycles of precipitation, evaporation and drainage. The combination of wet and dry conditions was necessary for bonding, the paper says.

In some cases, the researchers believe, favorable conditions saw some of those chains fold over and spontaneously replicate themselves by drawing other nucleotides from their environment, fulfilling one condition for the definition of life. Those polymers were imperfect, capable of improving through Darwinian evolution, fulfilling the other condition.

"That's the Holy Grail of experimental origins-of-life chemistry," says Pearce.

That rudimentary form of life would give rise to the eventual development of DNA, the genetic blueprint of higher forms of life, which would evolve much later. The world would have been inhabited only by RNA-based life until DNA evolved.

"DNA is too complex to have been the first aspect of life to emerge," Pudritz says. "It had to start with something else, and that is RNA."

The researchers' calculations show that the necessary conditions were present in thousands of ponds, and that the key combinations for the formation of life were far more likely to have come together in such ponds than in hydrothermal vents. The leading rival theory holds that life began in roiling fissures in the ocean floor, where the elements of life came together in blasts of heated water. The authors of the new paper say such conditions were unlikely to generate life, since the bonding required to form RNA needs both wet and dry cycles.

The calculations also appear to eliminate space dust as the source of life-generating nucleotides. Though such dust did indeed carry the right materials, it did not deposit them in sufficient concentration to generate life, the researchers have determined. At the time, early in the life of the solar system, meteorites were far more common, and could have landed in thousands of ponds, carrying the building blocks of life.

Pearce and Pudritz plan to put the theory to the test next year, when McMaster opens its Origins of Life laboratory that will re-create the pre-life conditions in a sealed environment.

"We're thrilled that we can put together a theoretical paper that combines all these threads, makes clear predictions and offers clear ideas that we can take to the laboratory," Pudritz says.


MORE:
Studying the roots of life


SOURCE
 

Stretchy glue closes wounds in just 60 SECONDS


Biomedical engineers from the University of Sydney and the United States have collaborated on the development of a potentially life-saving surgical glue, called MeTro.

A highly elastic and adhesive surgical glue that quickly seals wounds without the need for common staples or sutures could transform how surgeries are performed.

MeTro’s high elasticity makes it ideal for sealing wounds in body tissues that continually expand and relax – such as lungs, hearts and arteries – that are otherwise at risk of re-opening.

The material also works on internal wounds that are often in hard-to-reach areas and have typically required staples or sutures due to surrounding body fluid hampering the effectiveness of other sealants.










Well, here it is—it's really happening in our lifetime. Jetsons, here we come!...


Airbus’ Electric Flying Taxis Are Set to Take to the Skies Next Year




 


Scientists using NASA's Mars Reconnaissance Orbiter spacecraft, in orbit above the Martian surface, have made a surprising discovery: an ancient dried-up lake bed that once held 10 times as much water as all of the Great Lakes combined. There's a possibility that this location saw the evolution of life billions of years ago, and may offer clues as to how life arose on our own planet.

Observations by the Mars Reconnaissance Orbiter revealed that buried underneath the Eridania basin lay massive deposits of minerals. Further analysis of those minerals suggests that they were formed by volcanically heated underwater vents. Billions of years later, those volcanoes have gone extinct and the lake has dried up, but the mineral deposits remain.

"This site gives us a compelling story for a deep, long-lived sea and a deep-sea hydrothermal environment," says NASA's Paul Niles. "It is evocative of the deep-sea hydrothermal environments on Earth, similar to environments where life might be found on other worlds—life that doesn't need a nice atmosphere or temperate surface, but just rocks, heat and water."

That makes the Eridania basin one of the more important regions on Mars to search for life. If we do find life there—or even traces of it—it could tell us more about what the first life may have looked like on Earth. Even if we never find life there, these results are still an invaluable look at the conditions that the first life formed in.

"Even if we never find evidence that there's been life on Mars, this site can tell us about the type of environment where life may have begun on Earth," says Niles. "Volcanic activity combined with standing water provided conditions that were likely similar to conditions that existed on Earth at about the same time—when early life was evolving here."


SOURCE
 




For a long while it wasn't clear whether the native population of Easter Island (Rapa Nui) originated in Polynesia or South America.

And how can we explain its apparent paradox: the design, construction and transport of giant "moai" stone statues, a remarkable cultural achievement yet one carried out on a virtually barren island, which seemingly lacked both the resources and people to carry out such a feat?

Anthropologists have long wondered whether these seemingly simple inhabitants really had the capacity for such cultural complexity.

Or was a more advanced population, perhaps from the Americas, actually responsible – one that subsequently wiped out all the natural resources the island once had?

Recently, Rapa Nui has become the ultimate parable for humankind's selfishness; a moral tale of the dangers of environmental destruction.

In the "ecocide" hypothesis popularised by the geographer Jared Diamond, Rapa Nui is used as a demonstration of how society is doomed to collapse if we do not sit up and take note.

But more than 60 years of archaeological research actually paints a very different picture – and now new genetic data sheds further light on the island's fate. It is time to demystify Rapa Nui.

The 'ecocide' narrative doesn't stand up

The ecocide hypothesis centres on two major claims. First, that the island's population was reduced from several tens of thousands in its heyday, to a diminutive 1,500-3,000 when Europeans first arrived in the early 18th century.

Second, that the palm trees that once covered the island were callously cut down by the Rapa Nui population to move statues.




Europeans inspect the statues, around a century after first contact. Carlo Bottigella (1827)​


With no trees to anchor the soil, fertile land eroded away resulting in poor crop yields, while a lack of wood meant islanders couldn't build canoes to access fish or move statues. This led to internecine warfare and, ultimately, cannibalism.

The question of population size is one we still cannot convincingly answer. Most archaeologists agree on estimates somewhere between 4,000 and 9,000 people, although a recent study looked at likely agricultural yields and suggested the island could have supported up to 15,000.

But there is no real evidence of a population decline prior to the first European contact in 1722. Ethnographic reports from the early 20th century provide oral histories of warfare between competing island groups.

The anthropologist Thor Heyerdahl – most famous for crossing the Pacific in a traditional Inca boat – took these reports as evidence for a huge civil war that culminated in a battle of 1680, where the majority of one of the island's tribes was killed.

Obsidian flakes or "mata'a" littering the island have been interpreted as weapon fragments testifying to this violence.

However, recent research led by Carl Lipo has shown that these were more likely domestic tools or implements used for ritual tasks.

Surprisingly few of the human remains from the island show actual evidence of injury, just 2.5 percent, and most of those showed evidence of healing, meaning that attacks were not fatal. Crucially, there is no evidence, beyond historical word-of-mouth, of cannibalism.

It's debatable whether 20th century tales can really be considered reliable sources for 17th-century conflicts.

What really happened to the trees

More recently, a picture has emerged of a prehistoric population that was both successful and lived sustainably on the island up until European contact.

It is generally agreed that Rapa Nui, once covered in large palm trees, was rapidly deforested soon after its initial colonisation around 1200 AD. Although micro-botanical evidence, such as pollen analysis, suggests the palm forest disappeared quickly, the human population may only have been partially to blame.

The earliest Polynesian colonisers brought with them another culprit, namely the Polynesian rat. It seems likely that rats ate both palm nuts and sapling trees, preventing the forests from growing back.

But despite this deforestation, my own research on the diet of the prehistoric Rapanui found they consumed more seafood and were more sophisticated and adaptable farmers than previously thought.

Blame slavers – not lumberjacks

So what – if anything – happened to the native population for its numbers to dwindle and for statue carving to end? And what caused the reports of warfare and conflict in the early 20th century?

The real answer is more sinister. Throughout the 19th century, South American slave raids took away as much as half of the native population. By 1877, the Rapanui numbered just 111.

Introduced disease, destruction of property and enforced migration by European traders further decimated the natives and led to increased conflict among those remaining. Perhaps this, instead, was the warfare the ethnohistorical accounts refer to and what ultimately stopped the statue carving.

It had been thought that South Americans made contact with Rapa Nui centuries before the Europeans, as their DNA can be detected in modern native inhabitants. I have been involved in a new study, however, led by paleogeneticist Lars Fehren-Schmitz, which questions this timeline.

We analysed Rapanui human remains dating to before and after European contact.

Our work, published in the journal Current Biology, found no significant gene flow between South America and Easter Island before 1722. Instead, the considerable recent disruption to the island's population may have impacted on modern DNA.

Perhaps, then, the takeaway from Rapa Nui should not be a story of ecocide and a Malthusian population collapse.

Instead, it should be a lesson in how sparse evidence, a fixation with "mysteries", and a collective amnesia for historic atrocities caused a sustainable and surprisingly well-adapted population to be falsely blamed for their own demise.

And those statues? We know how they moved them; the local population knew all along. They walked – all we needed to do was ask.

Catrine Jarman, PhD researcher in Archaeology and Anthropology, University of Bristol.

This article was originally published by The Conversation. Read the original article.



SOURCE
 




Twelve thousand years ago everybody lived as hunters and gatherers. But by 5,000 years ago most people lived as farmers.

This brief period marked the biggest shift ever in human history with unparalleled changes in diet, culture and technology, as well as social, economic and political organisation, and even the patterns of disease people suffered.

While there were upsides and downsides to the invention of agriculture, was it the greatest blunder in human history? Three decades ago Jared Diamond thought so, but was he right?

Agriculture developed worldwide within a single and narrow window of time: between about 12,000 and 5,000 years ago. But it wasn't invented just once; as far as we know, it originated independently at least seven times, and perhaps as many as 11.

Farming was invented in places like the Fertile Crescent of the Middle East, the Yangzi and Yellow River Basins of China, the New Guinea highlands, in the Eastern USA, Central Mexico and South America, and in sub-Saharan Africa.

And while its impacts were tremendous for people living in places like the Middle East or China, its impacts would have been very different for the early farmers of New Guinea.

The reasons why people took up farming in the first place remain elusive, but dramatic changes in the planet’s climate during the last Ice Age — from around 20,000 years ago until 11,600 years ago — seem to have played a major role in its beginnings.

The invention of agriculture thousands of years ago led to the domestication of today’s major food crops like wheat, rice, barley, millet and maize, legumes like lentils and beans, sweet potato and taro, and animals like sheep, cattle, goats, pigs, alpacas and chickens.

It also dramatically increased the human carrying capacity of the planet. But in the process the environment was dramatically transformed. What started as modest clearings gave way to fields, with forests felled and vast tracts of land turned over to growing crops and raising animals.

In most places the health of early farmers was much poorer than that of their hunter-gatherer ancestors, because of the narrower range of foods they consumed and the widespread dietary deficiencies that came with it.

At archaeological sites like Abu Hureyra in Syria, for example, the changes in diet accompanying the move away from hunting and gathering are clearly recorded. The diet of Abu Hureyra's occupants dropped from more than 150 wild plants consumed as hunter-gatherers to just a handful of crops as farmers.

In the Americas, where maize was domesticated and heavily relied upon as a staple crop, poor iron absorption from a maize-based diet dramatically increased the incidence of anaemia, while a rice-based diet, the main staple of early farmers in southern China, was deficient in protein and inhibited vitamin A absorption.

There was a sudden increase in the number of human settlements signalling a marked shift in population. While maternal and infant mortality increased, female fertility rose with farming, the fuel in the engine of population growth.

The planet had supported roughly 8 million people when we were only hunter-gatherers. But the population exploded with the invention of agriculture climbing to 100 million people by 5,000 years ago, and reaching 7 billion people today.

People began to build settlements covering more than ten hectares - the size of ten rugby fields - which were permanently occupied. Early towns housed up to ten thousand people within rectangular stone houses with doors on their roofs at archaeological sites like Çatalhöyük in Turkey.

By way of comparison, traditional hunting and gathering communities were small, perhaps up to 50 or 60 people.

Crowded conditions in these new settlements, human waste, animal handling and pest species attracted to them led to increased illness and the rapid spread of infectious disease.

Today, around 75% of infectious diseases suffered by humans are zoonoses, ones obtained from or more often shared with domestic animals. Some common examples include influenza, the common cold, various parasites like tapeworms and highly infectious diseases that decimated millions of people in the past such as bubonic plague, tuberculosis, typhoid and measles.

In response, natural selection dramatically sculpted the genome of these early farmers. The genes for immunity are over-represented in terms of the evidence for natural selection and most of the changes can be timed to the adoption of farming. And geneticists suggest that 85% of the disease-causing gene variants among contemporary populations arose alongside the rise and spread of agriculture.

In the past, humans could only tolerate lactose during childhood, but with the domestication of dairy cows natural selection provided northern European farmers, and pastoralist populations in Africa and West Asia, with a lactase persistence gene variant. It's almost completely absent elsewhere in the world, and it allowed adults to digest lactose for the first time.

Starch consumption is also a feature of agricultural societies, and of some hunter-gatherers living in arid environments. The amylase genes, which increase people's ability to digest starch in their diet, were also subject to strong natural selection and increased dramatically in number with the advent of farming.

Another surprising change seen in the skeletons of early farmers is a smaller skull especially the bones of the face. Palaeolithic hunter-gatherers had larger skulls due to their more mobile and active lifestyle including a diet which required much more chewing.

Smaller faces affected oral health because human teeth didn’t reduce proportionately to the smaller jaw, so dental crowding ensued. This led to increased dental disease along with extra cavities from a starchy diet.

Living in densely populated villages and towns created for the first time in human history private living spaces where people no longer shared their food or possessions with their community.

These changes dramatically shaped people’s attitudes to material goods and wealth. Prestige items became highly sought after as hallmarks of power. And with larger populations came growing social and economic complexity and inequality and, naturally, increasing warfare.

Inequalities of wealth and status cemented the rise of hierarchical societies — first chiefdoms then hereditary lineages which ruled over the rapidly growing human settlements.

Eventually they expanded to form large cities, and then empires, with vast areas of land taken by force with armies under the control of emperors or kings and queens.

This inherited power was the foundation of the ‘great’ civilisations that developed across the ancient world and into the modern era with its colonial legacies that are still very much with us today.

No doubt the bad well and truly outweighs all the good that came from the invention of farming all those millennia ago. Jared Diamond was right: the invention of agriculture was without doubt the biggest blunder in human history. But we're stuck with it, and with so many mouths to feed today we have to make it work better than ever. For the future of humankind and the planet.


SOURCE





MORE:

The rise of agricultural states came at a big cost, a new book argues
 



When I was 12, on family vacation in New Mexico, I watched a group of elaborately-costumed Navajo men belt out one intimidating song after the next. They executed a set of beautifully coordinated dance turns to honour the four cardinal directions, each one symbolising sacred gifts from the gods. Yet the tourist-packed audience lost interest and my family, too, prepared to leave. Then, all of a sudden, the dancers were surprised by a haunting, muscled old man adorned with strange pendants, animal skulls, and scars etching patterns into his body and face.

Because the dancers were obviously terrified of this man I, too, became afraid and wanted to run, but we all stood rooted to the spot as he walked silently and majestically into the desert night. Afterwards, the lead dancer apologised profusely for the tribe’s shaman, or medicine man: he was holy but a bit eccentric. My 12-year-old self wondered how one might become like this extraordinary individual, so singular, respected and brave he could take the desert night alone.

That question has fuelled much of my neuroscience through the years. As I studied the brain, I found that the right arrangement of neural circuitry and chemistry could generate astonishingly creative and holy persons on the one hand, or profoundly delusional, even violent, fanatics on the other. To intensify the ‘god effect’ in people already attracted to religious ideas, my studies revealed, all we had to do was boost the activity of the neurotransmitter, dopamine, crucial for balanced emotion and thought, on the right side of the brain. But should dopamine spike too high, murderous impulses like terrorism and jihad could rear up instead.

Evidence that religion can produce extraordinary behaviours goes back to the dawn of human history, when our ancestors started burying the dead and produced remarkable, ritual art on cave walls. One of the first signs of religious consciousness dates to the upper Paleolithic, some 25,000 years ago, when a boy, also about age 12, crawled through hundreds of metres of pitch black, deep cave space, probably guided only by a flickering flame held in one hand and some fleetingly illuminated paintings on cave walls. When the boy reached a cul-de-sac in the bowels of the cave, he put red ochre onto his hand and made a print on the wall. Then he climbed out of the cul-de-sac and – we can surmise, given his skill and the fact that his bones have not been found – made it out alive.

But where did this boy get his courage? And why leave a handprint on the wall of a remote cave deep in the bowels of the earth? Some experts in cave art think the boy was performing a religious obligation. He, like others who made similar treks into the caves, was leaving a votive offering to the spirit world or gods and becoming a holy man – much like the majestic and terrifying Indian man I had seen when I was 12.

Dopamine probably fuelled his brain.

Throughout the centuries, bountiful dopamine has given rise to gifted leaders and peacemakers (Gandhi, Martin Luther King, Catherine of Siena), innovators (Zoroaster), seers (the Buddha), warriors (Napoleon, Joan of Arc), teachers of whole civilisations (Confucius) and visionaries (Laozi). Some of them founded not only enduring religious traditions but also profoundly influenced the cultures and civilisations associated with those traditions. But dopamine-fuelled religion has also unleashed monsters: Jim Jones (the ‘minister’ who persuaded hundreds of his followers to commit suicide) and the cult Aum Shinrikyo, whose leader had his adherents release sarin gas on the subways of Japan. Think of the fanatic terrorists of al Qaeda, who gave their lives to attack New York’s twin towers and the Pentagon on 11 September 2001.


As 9/11 suggests, the neurological line between the saint and the savage, the creative and the unconscionable, turns out to be razor-thin. Just look at the bounty of evidence showing that families of extraordinarily creative individuals often include members with histories of insanity, sometimes even criminal insanity. Genes that produce brains capable of unusually creative associations or ideas are also more likely to produce (in the same individual or in members of his/her family) brains vulnerable to loose or bizarre associations.

The medical literature abounds with descriptions of creative bursts following infusion of dopamine-enhancing drugs such as l-dopa (levodopa), used to treat Parkinson’s Disease (PD). Bipolar illness, which sends sufferers into prolonged bouts of dopamine-fuelled mania followed by devastating spells of depressive illness, can sometimes produce work of amazing virtuosity during the manic phase. Often these individuals refuse to take anti-dopamine drugs that can prevent the manic episodes precisely because they value the creative activity of which they are capable during these altered states.

Hallucinogenic drugs such as Psilocybin and LSD, which indirectly stimulate dopamine activity in the brain’s frontal lobes, can produce religious experience even in the avowedly non-religious. These hallucinogens produce vivid imagery, sometimes along with near psychotic breaks or intense spiritual experience, all tied to stimulation of dopamine receptors on neurons in the limbic system, the seat of emotion located in the midbrain, and in the prefrontal cortex, the upper brain that is the centre of complex thought.

Given all these fascinating correlations, sometime after the attack on the twin towers in New York City, I began to hypothesise that dopamine might provide a simple explanation for the paradoxical god effect. When dopamine in the limbic and prefrontal regions of the brain was high, but not too high, it would produce the ability to entertain unusual ideas and associations, leading to heightened creativity, inspired leadership and profound religious experience. When dopamine was too high, however, it would produce mental illness in genetically vulnerable individuals. In those who had been religious before, fanaticism could be the result.

While pursuing these ideas, I had a lucky break during routine office hours at the VA (Veterans Administration) Boston Healthcare System, where I regularly treat US veterans. I was doing a routine neuropsychological examination of a tall, distinguished elderly man with Parkinson’s Disease. This man was a decorated Second World War veteran and obviously intelligent. He had made his living as a consulting engineer but had slowly withdrawn from the working world as his symptoms progressed. His withdrawal was selective: he did not quit everything, his wife explained. ‘Just social parts of his work, some physical stuff and unfortunately his private religious devotions.’

When I asked what she meant by ‘devotions’ she replied that he used to pray and read his Bible all the time, but since the onset of the disease he had done so less and less. When I asked the patient himself about his religious interests, he replied that they seemed to have vanished. What was so striking was that he said he was quite unhappy about that fact. What appeared to be keeping him from his ‘devotions’ was that he found them ‘hard to fathom’. He had not stopped wanting to believe and practise his religion but simply found it more difficult to do so.

This was a man whose intelligence was above average, who apparently had been religious all his life and who could easily answer questions about religious ideas and doctrines. It was not an intellectual deficit that was the problem. When I asked him directly whether he had now rejected religion as false he said: ‘By no means!’ The difficulty he had was accessing his religious memories, feelings and experiences, in particular. Other equally complex ideas were still easily available to him, but religion as a sphere of interest for this man was nearly impossible.

The primary pathology associated with PD is a loss of dopamine activity, hypothesised for years to drive ‘hedonic reward’ or pleasure – that sense of well-being we all feel when we indulge in an experience like good food or sex. Whenever dopamine release occurred, proponents held, we would get a small hit of pleasure. That story made sense because many drugs of abuse, such as cocaine or amphetamine, stimulate dopamine activity in the midbrain.

But recent research had revealed something more complex. A Cambridge University neuroscientist named Wolfram Schultz had shown that dopamine was not a simple pleasure molecule, delivering a simple reward. Instead, it alerted us only to unexpected rewards, spiking when the prize delivered far exceeded the expected result.


To tease out this nuance, Schultz used a simple experimental design: he delivered varying quantities of fruit juice to monkeys while simultaneously recording activity in the monkey’s midbrain, the seat of emotion, where dopamine neurons were dense. He found that the neurons fired most intensely not when the monkey got a juice reward but when that reward was unexpectedly large. In short, dopamine neurons were oriented towards the pickup of new and significant rewards, novelty of the highest order for the individual. Since Schultz’s pioneering work in the midbrain, others have mapped out similar signalling patterns when dopamine activity moves into the prefrontal lobes, which mediate the most complex of thinking and creative processes, unique to humans.
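The logic of that finding is easy to demonstrate with a toy model. The sketch below is a generic reward-prediction-error learner (an illustration only, not Schultz's analysis code): the 'dopamine' signal is the gap between the reward delivered and the reward expected, so it is large only when the juice exceeds expectations, and it fades as the surprise wears off.

```python
# Toy reward-prediction-error model: the error term plays the role of the
# dopamine burst, spiking only when the reward beats the learned expectation.
def run_trials(rewards, alpha=0.2):
    expected = 0.0
    for trial, reward in enumerate(rewards, start=1):
        prediction_error = reward - expected       # the dopamine-like signal
        expected += alpha * prediction_error       # expectation catches up over trials
        print(f"trial {trial:2d}: reward={reward:.1f} "
              f"expected={expected:.2f} error={prediction_error:+.2f}")

# Ten routine juice deliveries, then one unexpectedly large one.
run_trials([1.0] * 10 + [3.0])
```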

But how would these new findings explain my PD patient’s difficulty in accessing religious ideas? Suppose religion created spectacular individuals because it pushed them into looking for unexpected rewards – a sense of transcendence or the pleasure of doing good – rather than all the usual rewards such as money or sex that the rest of us constantly pursue. Pursuit of unusual ideas could likewise be facilitated by dopamine, heightening creativity, too.

Here, I thought, was where science and religion actually meet. Like the most creative scientists, the most consistently religious individuals would be motivated only by things that consistently triggered surging dopamine and the unexpected rewards circuits in the prefrontal lobes of the brain: awe, fear, reverence and wonder. Such unexpected visions would define the most innovative artists, the most divergent philosophers and anyone who could find a sense of ecstasy in the beauty and strangeness of the world. In the genetically vulnerable, it would take just a little more juice to activate homicidal fanatics like the terrorists of 9/11.

Again, I tested these ideas on my PD patients. After screening for ‘religiousness’ through a questionnaire given to 71 vets in all, I found that a pattern was emerging. Of those with religious leanings prior to getting sick, only a subgroup lost religious fervour after illness set in. These were patients with ‘left-onset disease’ – meaning that their muscle problems had begun on the left side of the body, correlating with dysfunction in the right prefrontal regions of the brain. Those with left-onset disease reported significantly lower scores on all dimensions of religiosity (spiritual experiences, daily rituals, prayer and meditation) compared to those with right-onset disease.

How could I explain these results? I surmised it was loss of dopamine in the right half of the brain. To test that hypothesis, my team devised a ‘priming’ experiment to see if PD patients could access religious concepts as easily as other, equally complex ideas. To conduct priming experiments, you typically ‘prime’ or briefly present, a person with a word semantically related to a second, target word. For instance, the word ‘rose’ can be used as a prime for ‘violet’. The target word, ‘violet’, will be more quickly recognised following a prime with ‘rose’ than it would be if the prime had been something unrelated, such as ‘stamp’.
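For readers unfamiliar with the paradigm, here is a toy illustration of how a priming effect is quantified, using made-up reaction times rather than actual study data: targets are recognised faster after a related prime, and the effect is simply the difference in mean reaction time between the unrelated and related conditions.

```python
# Illustrative only: invented reaction times for one participant.
from statistics import mean

rt_related_ms   = [512, 498, 530, 505, 489, 521]  # e.g. "rose" then "violet"
rt_unrelated_ms = [561, 570, 548, 590, 566, 575]  # e.g. "stamp" then "violet"

priming_effect = mean(rt_unrelated_ms) - mean(rt_related_ms)
print(f"mean related RT:   {mean(rt_related_ms):.0f} ms")
print(f"mean unrelated RT: {mean(rt_unrelated_ms):.0f} ms")
print(f"priming effect:    {priming_effect:.0f} ms faster after a related prime")
```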

In our priming experiments with religious words we found that healthy volunteers were much quicker to recognise ‘worship God’ as a valid phrase when they were first presented with ‘pray quietly’ as compared with a control phrase. But this did not hold for PD patients with left-onset disease and damage on the right side of the brain! Relative to both right-onset PD patients and healthy volunteers, patients with left-onset disease did not benefit from subliminal presentation of the religious phrase, although their priming patterns for non-religious control phrases such as ‘pay taxes’ and ‘serve jury’ were normal in every way. These findings convinced me that my theory was at least partially correct: the dopamine receptors responsible for that transcendent, outsize sense of reward were dysfunctional on the right side of the brain.

But I still had to rule out competing ideas. One long-standing theory, suggested most prominently by Freud, ascribes religious commitment to anxiety. Put simply, the theory says that religion with its promise of an afterlife quells the free-floating anxiety caused by fear of death. This was a problem for me, given that my unexpected rewards theory of religion predicts the exact opposite: rather than eschewing fear, the religious would seek it out as one of the most novel, exciting and intense emotions served up by the brain.

I therefore attempted to pit the two theories against one another in another priming experiment. Over the course of several interview sessions, I told my PD patients a short story about a person climbing stairs in a hospital, ultimately coming across a surprise. Only the last sentence differed from one version to the next. In one ending the person witnessed a death; in a second he witnessed a religious ritual; and in the third, he saw a breathtaking view of the ocean. After participants were presented with each of these primes, they were tested for subtle changes in attitudes to religious belief by rating agreement with the statements ‘God or some supreme being exists’ and ‘God is an active agent in the world’.

In healthy volunteers and right- but not left-onset patients, religious belief-scores significantly increased following the aesthetic prime consisting of the ocean view (a wonderful reward) but not the death prime. (The religious ritual prime increased religious belief only inconsistently, with little impact compared with that of the ocean view.) The results directly refuted the anxiety theory of religion while supporting the notion that religiosity was spurred by the quest for unexpected reward.

What does all this say about how religion can produce both extraordinarily life-giving and generative human beings (holy men and women) and extraordinary monsters? The same mechanism that enhances our creativity – juicing up the right-sided limbic and prefrontal brain regions with dopamine – also opens us up to religious ideas and experience. But if these brain circuits are pushed too far, thinking becomes not merely divergent but outright deviant and psychotic.

Since at least the upper Paleolithic, religious cultures have been shaping, channelling and nourishing the quest for unexpected rewards. Today, such cultures as science, art, music, literature and philosophy confer the same sense of transcendence that religion has done in the past. The trick is tapping the god effect, inducing that altered state of wonder and making the breakthrough to greatness without going over the edge.



ABOUT THE AUTHOR
Patrick McNamara is associate professor of neurology and psychiatry at the Boston University School of Medicine and a professor at Northcentral University. He has published numerous articles in peer-reviewed journals and several books on the science of sleep and dreams, and on the psychology and neurology of religion. He is also a founding director of the Institute for the Biocultural Study of Religion.


SOURCE


The Dopamine Switch Between Atheist, Believer, and Fanatic
 




Elon Musk once described the sensational advances in artificial intelligence as “summoning the demon.” Boy, how the demon can play Go.

The AI company DeepMind announced last week it had developed an algorithm capable of excelling at the ancient Chinese board game. The big deal is that this algorithm, called AlphaGo Zero, is completely self-taught. It was armed only with the rules of the game — and zero human input.

AlphaGo, its predecessor, was trained on data from thousands of games played by human competitors. The two algorithms went to war, and AGZ triumphed 100-nil. In other words — put this up in neon lights — disregarding human intellect allowed AGZ to become a supreme exponent of its art.
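To make the "zero human input" idea concrete, here is a drastically simplified analogue (a toy sketch, not how AlphaGo Zero works internally; the real system couples a deep neural network to Monte Carlo tree search): a tabular agent that learns tic-tac-toe values purely by playing against itself, given nothing but the rules.

```python
# Toy self-play learner for tic-tac-toe: the training signal comes entirely
# from games the agent generates itself; no human game records are consulted.
import random
from collections import defaultdict

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

# values[state] ~ expected outcome for the player who just moved into `state`
values = defaultdict(float)
EPSILON, ALPHA = 0.1, 0.5

def choose_move(board, player):
    moves = [i for i, cell in enumerate(board) if cell == "."]
    if random.random() < EPSILON:
        return random.choice(moves)  # occasional exploration
    # otherwise pick the move whose resulting position has the best learned value
    return max(moves, key=lambda m: values["".join(board[:m]) + player + "".join(board[m + 1:])])

def self_play_game():
    board, player, history = ["."] * 9, "X", []
    while True:
        board[choose_move(board, player)] = player
        history.append(("".join(board), player))
        w = winner(board)
        if w is not None or "." not in board:
            return history, w
        player = "O" if player == "X" else "X"

# Train from nothing but self-play: wins reinforce the positions that led to them.
for _ in range(20000):
    history, w = self_play_game()
    for state, mover in history:
        outcome = 0.0 if w is None else (1.0 if w == mover else -1.0)
        values[state] += ALPHA * (outcome - values[state])

print(f"learned value estimates for {len(values)} positions from self-play alone")
```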

While DeepMind is the outfit most likely to feed Mr Musk’s fevered nightmares, machine autonomy is on the rise elsewhere. In January, researchers at Carnegie-Mellon University unveiled an algorithm capable of beating the best human poker players. The machine, called Libratus, racked up nearly $2m in chips against top-ranked professionals of Heads-Up No-Limit Texas Hold ‘em, a challenging version of the card game. Flesh-and-blood rivals described being outbluffed by a machine as “demoralising”. Again, Libratus improved its game by detecting and patching its own weaknesses, rather than borrowing from human intuition.



AGZ and Libratus are one-trick ponies but technologists dream of machines with broader capabilities. DeepMind, for example, declares it wants to create “algorithms that achieve superhuman performance in the most challenging domains with no human input”. Once fast, deep algorithms are unshackled from the slow, shallow disappointment of human intellect, they can begin crunching problems that our own lacklustre species has not confronted. Rather than emulating human intelligence, the top tech thinkers toil daily to render it unnecessary.

For that reason, we might one day look back on AGZ and Libratus as baby steps towards the Singularity, the much-debated point at which AI becomes super-intelligent, able to control its own destiny without recourse to human intervention. The most dystopian scenario is that AI becomes an existential risk.

Suppose that super-intelligent machines calculate, in pursuit of their programmed goals, that the best course of action is to build even cleverer successors. A runaway iteration takes hold, racing exponentially into fantastical realms of calculation.

One day, these goal-driven paragons of productivity might also calculate, without menace, that they can best fulfil their tasks by taking humans out of the picture. As others have quipped, the most coldly logical way to beat cancer is to eliminate the organisms that develop it. Ditto for global hunger and climate change.

These are riffs on the paper-clip thought experiment dreamt up by philosopher Nick Bostrom, now at the Future of Humanity Institute at Oxford university. If a hyper-intelligent machine, devoid of moral agency, was programmed solely to maximise the production of paper clips, it might end up commandeering all available atoms to this end. There is surely no sadder demise for humanity than being turned into office supplies. Professor Bostrom’s warning articulates the capability caution principle, a well-subscribed idea in robotics that we should avoid strong assumptions about the upper limits of what AI can do.

It is of course pragmatic to worry about job displacement: many of us, this writer included, are paid for carrying out a limited range of tasks. We are ripe for automation. But only fools contemplate the more distant future without anxiety — when machines may out-think us in ways we do not have the capacity to imagine.



SOURCE
 




The world has a large enough nuclear stockpile to ensure we could destroy this planet many times over. But those weapons don't come in an awe-inspiring, beautiful package. Until now. Imagine dying in a blaze of glory: we're dead, but it came in a grand blast. Spread the word.

The United States Department of Defense has suggested it is investing in the modernisation of its missile defences, but pretty soon these could be useless.

Global powers, including the United States and Russia, have been working on the concept of hypersonic missiles for over 50 years. At the moment, only preliminary tests of the systems needed for the missiles to work have been successful, although experts at the RAND Corporation expect these weapons to become a reality in the next decade. When they do, there could be devastating consequences.


What are hypersonic missiles?
“Hypersonic missiles combine the speed of ballistic missiles with the accuracy and manoeuvrability of cruise missiles,” says Patricia Lewis, research director of International Security at Chatham House. This means that they're not only fast (as their name implies) but nimble too.

The missiles are designed to travel at approximately 5,000 to 25,000 kilometres per hour, or one to five miles per second, a report for the RAND Corporation explains. In other words, the missiles fly more than 25 times faster than modern airplanes.

They come in two forms: hypersonic glide vehicles (HGVs) and hypersonic cruise missiles (HCMs). HGVs are unpowered vehicles that glide to their target along the top of the atmosphere, at altitudes of between 40km and 100km.

HGVs are launched on an intercontinental ballistic missile, or ICBM. Once released, they glide along the upper edge of the atmosphere, manoeuvring toward their target, says Richard Moore, senior engineer and director of RAND’s Washington office.

It's possible to keep the target a secret until the last few seconds of flight because of the uniquely low trajectory the missiles take through the atmosphere. It is only then that the missile makes its final dive.

As they near their target, they may continue to manoeuvre to avoid defences and strike whatever target they're going after. It is expected the missiles will operate at speeds between Mach 8 and Mach 20 – or between 6,200 miles per hour and 15,300 miles per hour.
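
As a rough sanity check on the figures quoted above, the snippet below converts Mach numbers to miles per hour and kilometres per hour using a sea-level speed of sound of about 343 m/s. The speed of sound varies with altitude and temperature, so treat the outputs as ballpark values only.

```python
# Ballpark Mach-number conversions, assuming a sea-level speed of sound of ~343 m/s.
# The figures quoted in the article (Mach 5 ~ 3,800 mph; Mach 8-20 ~ 6,200-15,300 mph,
# or up to roughly 25,000 km/h) come out about right.
SPEED_OF_SOUND_MS = 343.0    # metres per second at sea level, ~20 C
MS_TO_MPH = 2.23694
MS_TO_KMH = 3.6

for mach in (5, 8, 20):
    v = mach * SPEED_OF_SOUND_MS
    print(f"Mach {mach:>2}: ~{v * MS_TO_MPH:,.0f} mph, ~{v * MS_TO_KMH:,.0f} km/h")
```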

Hypersonic cruise missiles (HCMs), by contrast, use a solid-fuelled booster to accelerate to at least Mach 4. As they approach hypersonic speed (Mach 5, or about 3,800 miles per hour), the booster falls away and a more fuel-efficient supersonic-combustion ramjet engine, or scramjet, ignites.

In the atmosphere, this completes the acceleration to a higher speed, and then provides sustained power for the cruise portion of the missile’s flight.

These HCMs fly much higher than airplanes, but lower than HGVs because they need the air to be thick enough to feed their SCRAMJET engines. Like HGVs, these missiles also manoeuvre to a height that will evade defences, and will keep their target a secret until the last few seconds of their flight. Their flight altitudes are between a few tens of kilometres and 100 kilometres.

Another unique feature of the hypersonic missiles is that they can carry either nuclear weapons or conventional warheads, making them more versatile than previous missiles.




What makes these different from the average ICBM?
ICBMs travel much higher than hypersonic missiles, although at roughly the same speed. They have little manoeuvrability once launched and follow largely fixed, ballistic trajectories, which makes them quite predictable.

Because ICBMs travel at much higher altitudes than HGVs and HCMs, they can be detected by ground-based missile sensors much sooner, says Moore, and so intercepted. Also, because ICBMs are rocket-boosted, their exhaust plumes are easily picked up by ballistic-missile sensors in space. Unpowered HGVs and air-breathing HCMs produce no such plume, which leaves them largely undetectable by those sensors.


What could the impact of this be?
Currently, hypersonic missiles are being created by the United States, Russia and China, while Australia, France and India have all been reported to be developing the technology.

“The ability to easily circumvent enemy air defenses and deliver an accurate conventional weapon to a target anywhere on the planet within one hour would fill a big gap in current military capabilities,” Lewis says. “They will be capable of evading current missile defences and would result in hair-trigger responses,” she adds.

The United States especially has a variety of early warning satellites and missile defence systems which would be rendered largely useless.

Most technologically-advanced countries capable of developing hypersonic missiles would therefore be able to challenge the major powers. This means that the distribution of these missiles could create major geopolitical instabilities.

“The spread of hypersonic missiles, unless halted through export controls or a new treaty, will mean that a wider range of small countries will be able to threaten larger military powers such as China, Russia or the US,” Lewis says.

“As a result, regional powers that are trigger-happy can generate military conflicts that threaten US friends and allies and draw in major powers,” says Richard Speier, adjunct political scientist at RAND.

Bruce Blair, nuclear security expert at Princeton University, highlights that the capabilities of the missiles could have both a positive and a negative impact. “This weapon could be used to rapidly attack terrorist hideouts or missiles being fueled on their launch pads or highly defended inland facilities such as anti-satellite missile sites in China,” he says.

“Without such a tool, the United States might only have a nuclear option to deal with these threats and so hypersonic vehicles could raise the nuclear threshold during conflict,” he adds. “On the other hand it could lower the threshold for conventional conflict and make the outbreak of war more likely.”

Kingston Reif, director for disarmament and threat reduction policy at the Arms Control Association, also emphasises the ability of hypersonic missiles to confuse and accelerate military conflict. “A big concern is the launch of a conventionally-armed hypersonic missile could be confused with a nuclear armed one,” he says. “Given that hypersonic weapons are still under development, there is a critical need to develop measures to constrain the development of these weapons.”




SOURCE
 

[Attachment: hypersonic missile-final.jpg]



Mainstream philosophy in the so-called West is narrow-minded, unimaginative, and even xenophobic. I know I am levelling a serious charge. But how else can we explain the fact that the rich philosophical traditions of China, India, Africa, and the Indigenous peoples of the Americas are completely ignored by almost all philosophy departments in both Europe and the English-speaking world?

Western philosophy used to be more open-minded and cosmopolitan. The first major translation into a European language of the Analects, the sayings of Confucius (551-479 BCE), was done by Jesuits, who had extensive exposure to the Aristotelian tradition as part of their rigorous training. They titled their translation Confucius Sinarum Philosophus, or Confucius, the Chinese Philosopher (1687).

One of the major Western philosophers who read with fascination Jesuit accounts of Chinese philosophy was Gottfried Wilhelm Leibniz (1646-1716). He was stunned by the apparent correspondence between binary arithmetic (which he invented, and which became the mathematical basis for all computers) and the I Ching, or Book of Changes, the Chinese classic that symbolically represents the structure of the Universe via sets of broken and unbroken lines, essentially 0s and 1s. (In the 20th century, the psychoanalyst Carl Jung was so impressed with the I Ching that he wrote a philosophical foreword to a translation of it.) Leibniz also said that, while the West has the advantage of having received Christian revelation, and is superior to China in the natural sciences, ‘certainly they surpass us (though it is almost shameful to confess this) in practical philosophy, that is, in the precepts of ethics and politics adapted to the present life and the use of mortals’.

The German philosopher Christian Wolff echoed Leibniz in the title of his public lecture Oratio de Sinarum Philosophia Practica, or Discourse on the Practical Philosophy of the Chinese (1721). Wolff argued that Confucius showed that it was possible to have a system of morality without basing it on either divine revelation or natural religion. Because it proposed that ethics can be completely separated from belief in God, the lecture caused a scandal among conservative Christians, who had Wolff relieved of his duties and exiled from Prussia. However, his lecture made him a hero of the German Enlightenment, and he immediately obtained a prestigious position elsewhere. In 1730, he delivered a second public lecture, De Rege Philosophante et Philosopho Regnante, or On the Philosopher King and the Ruling Philosopher, which praised the Chinese for consulting ‘philosophers’ such as Confucius and his later follower Mengzi (fourth century BCE) about important matters of state.

Chinese philosophy was also taken very seriously in France. One of the leading reformers at the court of Louis XV was François Quesnay (1694-1774). He praised Chinese governmental institutions and philosophy so lavishly in his work Despotisme de la Chine (1767) that he became known as ‘the Confucius of Europe’. Quesnay was one of the originators of the concept of laissez-faire economics, and he saw a model for this in the sage-king Shun, who was known for governing by wúwéi (non-interference in natural processes). The connection between the ideology of laissez-faire economics and wúwéi continues to the present day. In his State of the Union address in 1988, the US president Ronald Reagan quoted a line describing wúwéi from the Daodejing, which he interpreted as a warning against government regulation of business. (Well, I didn’t say that every Chinese philosophical idea was a good idea.)

Leibniz, Wolff and Quesnay are illustrations of what was once a common view in European philosophy. In fact, as Peter K J Park notes in Africa, Asia, and the History of Philosophy: Racism in the Formation of the Philosophical Canon (2014), the only options taken seriously by most scholars in the 18th century were that philosophy began in India, that philosophy began in Africa, or that both India and Africa gave philosophy to Greece.

So why did things change? As Park convincingly argues, Africa and Asia were excluded from the philosophical canon by the confluence of two interrelated factors. On the one hand, defenders of the philosophy of Immanuel Kant (1724-1804) consciously rewrote the history of philosophy to make it appear that his critical idealism was the culmination toward which all earlier philosophy was groping, more or less successfully.

On the other hand, European intellectuals increasingly accepted and systematised views of white racial superiority that entailed that no non-Caucasian group could develop philosophy. (Even St Augustine, who was born in northern Africa, is typically depicted in European art as a pasty white guy.) So the exclusion of non-European philosophy from the canon was a decision, not something that people have always believed, and it was a decision based not on a reasoned argument, but rather on polemical considerations involving the pro-Kantian faction in European philosophy, as well as views about race that are both scientifically unsound and morally heinous.

Kant himself was notoriously racist. He treated race as a scientific category (which it is not), correlated it with the ability for abstract thought, and – theorising on the destiny of races in lectures to students – arranged them in a hierarchical order:


  1. ‘The race of the whites contains all talents and motives in itself.’
  2. ‘The Hindus … have a strong degree of calm, and all look like philosophers. That notwithstanding, they are much inclined to anger and love. They thus are educable in the highest degree, but only to the arts and not to the sciences. They will never achieve abstract concepts. [Kant ranks the Chinese with East Indians, and claims that they are] static … for their history books show that they do not know more now than they have long known.’
  3. ‘The race of Negroes … [is] full of affect and passion, very lively, chatty and vain. It can be educated, but only to the education of servants, ie, they can be trained.’
  4. ‘The [Indigenous] American people are uneducable; for they lack affect and passion. They are not amorous, and so are not fertile. They speak hardly at all, … care for nothing and are lazy.’

Those of us who are specialists on Chinese philosophy are particularly aware of Kant’s disdain for Confucius: ‘Philosophy is not to be found in the whole Orient. … Their teacher Confucius teaches in his writings nothing outside a moral doctrine designed for the princes … and offers examples of former Chinese princes. … But a concept of virtue and morality never entered the heads of the Chinese.’

Kant is easily one of the four or five most influential philosophers in the Western tradition. He asserted that the Chinese, Indians, Africans and the Indigenous peoples of the Americas are congenitally incapable of philosophy. And contemporary Western philosophers take it for granted that there is no Chinese, Indian, African or Native American philosophy. If this is a coincidence, it is a stunning one.



If philosophy starts with Plato’s Republic,
then I guess the inventor of the Socratic method
was not a philosopher


One might argue that, while Kant’s racist premises are indefensible, his conclusion is correct, because the essence of philosophy is to be a part of one specific Western intellectual lineage. This is the position defended by D Kyle Peone in the conservative journal The Weekly Standard. Peone, a postgraduate in philosophy at Emory University in Georgia, argued that, because ‘philosophy’ is a word of Greek origin, it refers only to the tradition that grows out of the ancient Greek thinkers. A similar line of argument was given here in Aeon by Nicholas Tampio, who pronounced that ‘Philosophy originates in Plato’s Republic.’

These are transparently bad arguments (as both Jay Garfield and Amy Olberding have pointed out). For one thing, if the etymology of a term determines which culture ‘owns’ that subject, then there is no algebra in Europe, since we got that term from Arabic. In addition, if philosophy starts with Plato’s Republic, then I guess the inventor of the Socratic method was not a philosopher. My colleagues who teach and write books on pre-Socratic ‘philosophers’ such as Heraclitus and Parmenides are also out of jobs.

Peone and Tampio are part of a long line of thinkers who have tried to simply define non-European philosophy out of existence. In What is Philosophy? (1956), Martin Heidegger claimed that:

The often-heard expression ‘Western-European philosophy’ is, in truth, a tautology. Why? Because philosophy is Greek in its nature; … the nature of philosophy is of such a kind that it first appropriated the Greek world, and only it, in order to unfold.

Similarly, on a visit to China in 2001, Jacques Derrida stunned his hosts (who teach in Chinese philosophy departments) by announcing that ‘China does not have any philosophy, only thought.’ In response to the obvious shock of his audience, Derrida insisted that ‘Philosophy is related to some sort of particular history, some languages, and some ancient Greek invention. … It is something of European form.’

The statements of Derrida and Heidegger might have the appearance of complimenting non-Western philosophy for avoiding the entanglements of Western metaphysics. In actuality, their comments are as condescending as talk of ‘noble savages’, who are untainted by the corrupting influences of the West, but are for that very reason barred from participation in higher culture.

It is not only philosophers in the so-called Continental tradition who are dismissive of philosophy outside the Anglo-European canon. The British philosopher G E Moore (1873-1958) was one of the founders of analytic philosophy, the tradition that has become dominant in the English-speaking world. When the Indian philosopher Surendra Nath Dasgupta read a paper on the epistemology of Vedanta to a session of the Aristotelian Society in London, Moore’s only comment was: ‘I have nothing to offer myself. But I am sure that whatever Dasgupta says is absolutely false.’ The audience of British philosophers in attendance roared with laughter at the devastating ‘argument’ Moore had levelled against this Indian philosophical system.

It might be tempting to dismiss this as just a joke between colleagues, but we have to keep in mind that Indian philosophy was already marginalised in Moore’s era. His joke would have had an exclusionary effect similar to sexist jokes made in professional contexts today.

The case of Eugene Sun Park illustrates how Moore’s intellectual descendants are equally narrow-minded. When Sun Park was a student in a mainstream philosophy department in the US Midwest, he tried to encourage a more diverse approach to philosophy by advocating the hiring of faculty who specialise in Chinese philosophy or one of the other less commonly taught philosophies. He reports that he found himself ‘repeatedly confounded by ignorance and, at times, thinly veiled racism’. One member of the faculty basically told him: ‘This is the intellectual tradition we work in. Take it or leave it.’ When Sun Park tried to at least refer to non-Western philosophy in his own dissertation, he was advised to ‘transfer to the Religious Studies Department or some other department where “ethnic studies” would be more welcome’.

Sun Park eventually dropped out of his doctoral programme, and is now a filmmaker. How many other students – particularly students who might have brought greater diversity to the profession – have been turned off from the beginning, or have dropped out along the way, because philosophy seems like nothing but a temple to the achievement of white males?

Those who say that Chinese philosophy is irrational
do not bother to read it, and simply dismiss it in ignorance

Some philosophers will grant (grudgingly) that there might be philosophy in China or India, for example, but then assume that it somehow isn’t as good as European philosophy. Most contemporary Western intellectuals gingerly dance around this issue. The late Justice Antonin Scalia was an exception, saying in print what many people actually think, or whisper to like-minded colleagues over drinks at the club. He referred to the thought of Confucius as ‘the mystical aphorisms of the fortune cookie’.

To anyone who asserts that there is no philosophy outside the Anglo-European tradition, or who admits that there is philosophy outside the West but thinks that it simply isn’t any good, I ask the following. Why does he think that the Mohist state-of-nature argument to justify government authority is not philosophy? What does he make of Mengzi’s reductio ad absurdum against the claim that human nature is reducible to desires for food and sex? Why does he dismiss Zhuangzi’s version of the infinite regress argument for skepticism? What is his opinion of Han Feizi’s argument that political institutions must be designed so that they do not depend upon the virtue of political agents? What does he think of Zongmi’s argument that reality must fundamentally be mental, because it is inexplicable how consciousness could arise from matter that is non-conscious? Why does he regard the Platonic dialogues as philosophical, yet dismiss Fazang’s dialogue in which he argues for, and responds to, objections against the claim that individuals are defined by their relationships to others? What is his opinion of Wang Yangming’s arguments for the claim that it is impossible to know what is good yet fail to do what is good? Does he find convincing Dai Zhen’s effort to produce a naturalistic foundation for ethics in the universalisability of our natural motivations? What does he make of Mou Zongsan’s critique of Kant, or Liu Shaoqi’s argument that Marxism is incoherent unless supplemented with a theory of individual ethical transformation? Does he prefer the formulation of the argument for the equality of women given in the Vimalakirti Sutra, or the one given by the Neo-Confucian Li Zhi, or the one given by the Marxist Li Dazhao? Of course, the answer to each question is that those who suggest that Chinese philosophy is irrational have never heard of any of these arguments because they do not bother to read Chinese philosophy and simply dismiss it in ignorance.

The sad reality is that comments such as those by Kant, Heidegger, Derrida, Moore, Scalia and the professors that Sun Park encountered are manifestations of what Edward W Said labelled ‘Orientalism’ in his eponymous book of 1979: the view that everything from Egypt to Japan is essentially the same, and is the polar opposite of the West: ‘The Oriental is irrational, depraved (fallen), childlike, “different”; thus the European is rational, virtuous, mature, “normal”.’ Those under the influence of Orientalism do not need to really read Chinese (or other non-European) texts or take their arguments seriously, because they come pre-interpreted: ‘“Orientals” for all practical purposes were a Platonic essence, which any Orientalist (or ruler of Orientals) might examine, understand, and expose.’ And this essence guarantees that what Chinese, Indian, Middle Eastern or other non-European thinkers have to say is, at best, quaint, at worst – fatuous.

Readers of this essay might be disappointed that my examples (both positive and negative) have focused on Chinese philosophy. This is simply because Chinese philosophy is the area in non-Western philosophy that I know best. To advocate that we teach more philosophy outside the Anglo-European mainstream is not to suggest the unrealistic goal that each of us should be equally adept at lecturing on all of them. However, we should not forget that Chinese philosophy is only one of a substantial number of less commonly taught philosophies (LCTP) that are largely ignored by US philosophy departments, including African, Indian, and Indigenous philosophies. Although I am far from an expert in any of these traditions, I do know enough about them to recognise that they have much to offer as philosophy.

Just read An Essay on African Philosophical Thought: The Akan Conceptual Scheme (1987) by Kwame Gyekye, or Philosophy and an African Culture (1980) by Kwasi Wiredu, or Philosophy in Classical India (2001) by Jonardon Ganeri, or Buddhism as Philosophy (2007) by Mark Siderits, or Aztec Philosophy (2014) by James Maffie, or the writings of Kyle Powys Whyte at Michigan State University on Indigenous environmentalism. Many forms of philosophy that are deeply influenced by the Greco-Roman tradition (and hence particularly easy to incorporate into the curriculum) are also ignored in mainstream departments, including African-American, Christian, feminist, Islamic, Jewish, Latin American, and LGBTQ philosophies. Adding coverage of any of them to the curriculum would be a positive step toward greater diversity.

I am not saying that mainstream Anglo-European philosophy is bad and all other philosophy is good. There are people who succumb to this sort of cultural Manicheanism, but I am not one of them. My goal is to broaden philosophy by tearing down barriers, not to narrow it by building new ones. To do this is to be more faithful to the ideals that motivate the best philosophy in every culture. When the ancient philosopher Diogenes was asked what city he came from, he replied: ‘I am a citizen of the world.’ Contemporary philosophy in the West has lost this perspective. In order to grow intellectually, to attract an increasingly diverse student body, and to remain culturally relevant, philosophy must recover its original cosmopolitan ideal.

This article is an edited excerpt from Bryan W Van Norden’s Taking Back Philosophy: A Multicultural Manifesto (2017), with a foreword by Jay L Garfield, published by Columbia University Press.


SOURCE
 

[Attachment: chinese philosophy-final.jpg]


What is Fecal Microbiota Transplant?

Fecal Microbiota Transplant (FMT) is a procedure in which fecal matter, or stool, is collected from a tested donor, mixed with a saline or other solution, strained, and placed in a patient, by colonoscopy, endoscopy, sigmoidoscopy, or enema.

The purpose of fecal transplant is to replace good bacteria that have been killed or suppressed, usually by the use of antibiotics, allowing bad bacteria, specifically Clostridium difficile, or C. diff., to over-populate the colon. This infection causes a condition called C. diff. colitis, resulting in often debilitating, sometimes fatal diarrhea.




Healing Clostridium difficile Infections (CDI)

C. diff. is a very serious infection, and its incidence is on the rise throughout the world. The CDC reports that approximately 347,000 people in the U.S. alone were diagnosed with this infection in 2012. Of those, at least 14,000 died. Some estimates place the death toll in the 30,000 to 50,000 range, were the U.S. to use the same cause-of-death reporting methods as most of the rest of the world.

Fecal transplant has also had promising results with many other digestive or auto-immune diseases, including Irritable Bowel Syndrome, Crohn’s Disease, and Ulcerative Colitis. It has also been used around the world to treat other conditions, although more research in other areas is needed.



History of FMT


Fecal transplant was first documented in 4th-century China, where it was known as “yellow soup”.
It has been used for over 100 years in veterinary medicine, and has been used regularly for decades in many countries as the first line of defense, or treatment of choice, for C. diff. It is customary in many areas of the world for a newborn infant to receive a tiny amount of the mother’s stool by mouth, thought to provide immediate population of good bacteria in the baby’s colon, thereby jump-starting the baby’s immune system.

Fecal transplant has been used in the U.S. sporadically since the 1950s, without much regulation. It has gained popularity in the U.S. in the past few years, although experts estimate that the total number of treatments performed to date in the U.S. remains below 500 patients.

In late spring of 2013, the FDA announced it was classifying fecal matter as both an Investigational New Drug (IND) and a Biologic, and that only physicians currently in possession of an approved IND application would be allowed to continue performing fecal transplant.

This resulted in fewer than 20 physicians in the U.S. being allowed to perform fecal transplant. There was a groundswell of opposition from physicians and patients, and on June 17th, 2013, the FDA reversed its position and announced that qualified physicians could continue to perform FMT for recurrent C. diff. only, with signed consents from patients and tested donor stool.

This has resulted in more and more physicians beginning to perform fecal transplant, but there are still only limited numbers serving the large population needing the treatment. There are also many patients who do not have a donor to assist them.
And there are many patients who have never even heard of this treatment, even though the success rate for treatment of recurrent C. diff. is estimated to be well over 90%.

In all documentation, dating back to 4th-century China, there has never been a single serious side effect reported from fecal transplant.



Highly Effective Treatment

Fecal Transplant is a low-cost, low-risk, highly effective treatment. It is not currently covered by most insurance companies, as it is still classified as an experimental treatment.

The Fecal Transplant Foundation was created to raise awareness of this life saving treatment, to help patients and physicians, and to accomplish the many goals in our Mission Statement.








Source
 

[Attachment: fecal transplant frontier-final.jpg]


Everyone knows what it feels like to have consciousness: it’s that self-evident sense of personal awareness, which gives us a feeling of ownership and control over the thoughts, emotions and experiences that we have every day.

Most experts think that consciousness can be divided into two parts: the experience of consciousness (or personal awareness), and the contents of consciousness, which include things such as thoughts, beliefs, sensations, perceptions, intentions, memories and emotions.

It’s easy to assume that these contents of consciousness are somehow chosen, caused or controlled by our personal awareness – after all, thoughts don’t exist until we think them. But in a new research paper in Frontiers in Psychology, we argue that this is a mistake.

We suggest that our personal awareness does not create, cause or choose our beliefs, feelings or perceptions. Instead, the contents of consciousness are generated “behind the scenes” by fast, efficient, non-conscious systems in our brains. All this happens without any interference from our personal awareness, which sits passively in the passenger seat while these processes occur.

Put simply, we don’t consciously choose our thoughts or our feelings – we become aware of them.


Not just a suggestion

If this sounds strange, consider how effortlessly we regain consciousness each morning after losing it the night before; how thoughts and emotions – welcome or otherwise – arrive already formed in our minds; how the colours and shapes we see are constructed into meaningful objects or memorable faces without any effort or input from our conscious mind.

Consider that all the neuropsychological processes responsible for moving your body or using words to form sentences take place without involving your personal awareness. We believe that the processes responsible for generating the contents of consciousness do the same.

Our thinking has been influenced by research into neuropsychological and neuropsychiatric disorders, as well as more recent cognitive neuroscience studies using hypnosis. The studies using hypnosis show that a person’s mood, thoughts and perceptions can be profoundly altered by suggestion.

In such studies, participants go through a hypnosis induction procedure, to help them to enter a mentally focused and absorbed state. Then, suggestions are made to change their perceptions and experiences.

For example, in one study, researchers recorded the brain activity of participants when they raised their arm intentionally, when it was lifted by a pulley, and when it moved in response to a hypnotic suggestion that it was being lifted by a pulley.

Similar areas of the brain were active during the involuntary and the suggested “alien” movement, while brain activity for the intentional action was different. So, hypnotic suggestion can be seen as a means of communicating an idea or belief that, when accepted, has the power to alter a person’s perceptions or behaviour.

The personal narrative

All this may leave one wondering where our thoughts, emotions and perceptions actually come from. We argue that the contents of consciousness are a subset of the experiences, emotions, thoughts and beliefs that are generated by non-conscious processes within our brains.

This subset takes the form of a personal narrative, which is constantly being updated. The personal narrative exists in parallel with our personal awareness, but the latter has no influence over the former.

The personal narrative is important because it provides information to be stored in your autobiographical memory (the story you tell yourself, about yourself), and gives human beings a way of communicating the things we have perceived and experienced to others.

This, in turn, allows us to generate survival strategies; for example, by learning to predict other people’s behaviour. Interpersonal skills like this underpin the development of social and cultural structures, which have promoted the survival of humankind for millennia.

So, we argue that it is the ability to communicate the contents of one’s personal narrative – and not personal awareness – that gives humans their unique evolutionary advantage.

What’s the point?

If the experience of consciousness does not confer any particular advantage, it’s not clear what its purpose is. But as a passive accompaniment to non-conscious processes, we don’t think that the phenomenon of personal awareness has a purpose, in much the same way that rainbows do not. Rainbows simply result from the reflection, refraction and dispersion of sunlight through water droplets – none of which serves any particular purpose.

Our conclusions also raise questions about the notions of free will and personal responsibility. If our personal awareness does not control the contents of the personal narrative which reflects our thoughts, feelings, emotions, actions and decisions, then perhaps we should not be held responsible for them.

In response to this, we argue that free will and personal responsibility are notions that have been constructed by society. As such, they are built into the way we see and understand ourselves as individuals, and as a species. Because of this, they are represented within the non-conscious processes that create our personal narratives, and in the way we communicate those narratives to others.

Just because consciousness has been placed in the passenger seat does not mean we need to dispense with important everyday notions such as free will and personal responsibility. In fact, they are embedded in the workings of our non-conscious brain systems. They have a powerful purpose in society and have a deep impact on the way we understand ourselves.



SOURCE
 

[Attachment: mind-consciousness=final.jpg]




Ignorance is entropy or disorder. Entropy is nonexistence. Life is order. Order postulates knowledge. For life, knowledge is its only tool to circumvent and prevent entropy and nonexistence. What does it mean, then, when an individual, a cult, or a society embraces willful ignorance and spurns the search for knowledge?

John O. Campbell, an independent scholar from British Columbia, has published several works [1-4] describing his version of universal Darwinism. This framework proposes that Darwinian selection explains what exists not just biologically but in many other realms as well, from the quantum to the cultural to the cosmological. My interest in John’s framework developed after I began researching ‘cosmological natural selection with intelligence’ [5-7], and seeing how concepts like entropy, selection, and adaptation seem fundamental in both biology and physics. My research led me to the Evo Devo Universe research community, to which John also belongs, and I soon learned of his remarkable book Darwin Does Physics [2]. This book presents some of the most intellectually exhilarating ideas I’ve come across in years. I was grateful to have the opportunity to interview John for This View of Life, and help communicate these ideas to a wider audience.

Michael Price: Thanks very much for speaking with us, John. Let’s start with a few words about universal Darwinism in general. Darwinism is normally used to explain what exists biologically, but universal Darwinism suggests that it may explain what exists in many non-biological domains as well. What are some of these other domains?

John Campbell: Thank you, Michael, for inviting me to this interview and also for introducing me to This View of Life, whose byline is "anything and everything from an evolutionary perspective." This byline hits the nail on the head, and is a perspective that is becoming increasingly relevant as a growing number of researchers develop Darwinian/evolutionary theories across the entire scope of scientific subject matter. This phenomenon might be termed universal Darwinism.

Surprisingly well-developed Darwinian theories have been proposed to explain the creation and evolution of complexity not just in genetics and biology (including evolutionary psychology), but in cosmology [8], quantum physics [9], neuroscience [10], and practically every branch of the social sciences.

Of course, this ubiquity raises the question: why is Darwinian process observed so widely in nature? As your question implies, this may have to do with the nature of existence itself. We might consider that existence tends to be rare, complex and fragile due to the pervasive dissipative action of the second law of thermodynamics. The second law is one of the most fundamental laws of physics, and states that the total entropy – that is, disorder – of an isolated system can only increase over time. Darwinian processes may be viewed as nature’s method of countering this universal tendency towards disorder and non-existence.

MP: Pretty intriguing so far! But how can we achieve an integrated understanding of how Darwinian principles operate across such diverse domains? Are there key concepts that are fundamental to all Darwinian systems, no matter what domain they’re operating in?

JC: I believe the most important key concept is knowledge. The diverse fields of study mentioned above all involve knowledge repositories such as genomes, quantum wave functions, mental models, and cultural models.

To understand what knowledge really means, we need to understand its inverse, ignorance. The concept of ignorance is central to information theory, developed originally by Claude Shannon [11]. He defined information in terms of the "surprise" experienced by a model which has predicted some outcome, and then received evidence about the actual outcome that contradicts its prediction somewhat. In this sense, information is a measure of how ignorant the model was. This model, by the way, could be any system that attempts to predict any outcome in the world. For example, it could be a scientific model, which proposes a certain hypothesis; or the genome of a species, which attempts to predict the best ways to survive and reproduce; or a quantum wave function, which probabilistically predicts the future states of quantum systems.

Knowledge involves the reduction of ignorance. It results in the model making better predictions of actual outcomes and thus reducing its likelihood of being surprised in the future. The mathematics of this process is Bayesian inference, which describes the updating of models when they receive information—that is, surprising new evidence—so that going forward the model is optimized to make the most accurate predictions possible, "learning" from the evidence it has encountered.

As can be seen in the quick sketch above, the systems enabling knowledge accumulation—which I refer to as "inferential systems"—are not simple; they involve complex concepts including probabilistic models, information, and Bayesian inference. Over evolutionary time, inferential systems accumulate the knowledge required to achieve extended forms of existence. This mechanism of knowledge accumulation may be understood as forming the core of all Darwinian processes.
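
To make the “surprise” and Bayesian updating described above concrete, here is a small illustrative sketch in Python. The scenario is my own assumption rather than an example from Campbell's work: a model with a Beta(1,1) prior predicts a binary outcome whose true probability is 0.8, scores its Shannon surprise (the negative log2 of the probability it assigned to what actually happened), and then updates its belief by Bayes' rule.

```python
# Illustrative sketch (not taken from Campbell's books): an inferential system
# predicts a binary outcome, measures its Shannon surprise in bits, and updates
# its model with Bayes' rule. The true outcome probability and the Beta(1,1)
# prior are arbitrary choices made for the demo.
import math
import random

random.seed(0)
TRUE_P = 0.8              # the "world": outcome 1 occurs with probability 0.8
alpha, beta = 1.0, 1.0    # Beta(1,1) prior: the model starts out maximally ignorant

for trial in range(1, 21):
    p_one = alpha / (alpha + beta)                 # predictive probability of a 1
    outcome = 1 if random.random() < TRUE_P else 0
    p_assigned = p_one if outcome == 1 else 1.0 - p_one
    surprise = -math.log2(p_assigned)              # Shannon information received, in bits
    alpha += outcome                               # conjugate Bayesian update of the belief
    beta += 1 - outcome
    print(f"trial {trial:2d}: predicted P(1)={p_one:.2f}, saw {outcome}, "
          f"surprise={surprise:.2f} bits")
```

Over the trials the surprises trend downward: the model is trading ignorance for knowledge, which is the pattern the interview generalises to genomes, brains and wave functions.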

MP: It’s fascinating how knowledge-related concepts seem so central to all Darwinian processes. What’s even more fascinating is the fact that ignorance is actually equivalent, mathematically, to entropy! But we’ll get to that in a minute. First, let’s make sure everyone understands exactly why knowledge is so central to Darwinian evolution. The process of accumulating knowledge is really the same process as adapting to an environment, right? The most familiar example would be biological adaptation: as a species’ genome adapts to some feature of its environment, it’s as if the genome is acquiring knowledge about the best strategies for survival and reproduction in that environment. That seems clear enough, but could you provide similar examples from Darwinian domains that are less familiar than biology?

JC: We tend to think of "evidence-based knowledge" as a human construct, but in my view this is an anthropomorphic trap, because knowledge is in fact universal.

Cultural knowledge is, nonetheless, one of nature’s great achievements, and is the form of knowledge most familiar to us. We can interpret all cultural processes, for example agriculture, as the accumulation of evidence-based knowledge; those variations within crop species which conform to human preferences have continuously been selected. Many such cultural processes do not use evidence in an optimal manner and are best described mathematically as ‘approximate’ rather than ‘exact’ Bayesian processes. A more exact cultural practice is science: as the great E.T. Jaynes [12] wrote, Bayesian inference is the very logic of science, and science is an obvious evidence-based process of knowledge accumulation and learning.

A recent revolution of unification in neuroscience, that is little known outside the field itself, is called the Bayesian brain formulation [13]. The leader of this revolution, Karl Friston of University College London, has recently been rated the most influential neuroscientist of modern times. In his interpretation, the brain is essentially a Bayesian computer which selects among a variety of mental hypotheses on the basis of sensory evidence. What we see visually, for example, is due to a process of inference: mental processes generate a number of plausible candidates, and sensory evidence is used to select the hypothesis which best fits the evidence.

As you mention, genomes may be interpreted as knowledge repositories which have been accumulated over evolutionary time through the process of natural selection. This too is evidence-based knowledge, with the evidence in the form of what can and cannot exist (that is, survive and reproduce).

Quantum physics has long been plagued with the anthropomorphic notion that human observation is involved with quantum phenomena. Recently a new understanding, developed by Wojciech Zurek (Los Alamos National Laboratory) and others [9], makes it clear that quantum systems react in the same way to other quantum systems in their environment, whether or not any human is observing. When one quantum system exchanges information with another, its knowledge repository, in the form of its wave function, is updated in a quantum jump and is able to better predict outcomes of future interactions. This may readily be interpreted as an accumulation of evidence-based knowledge.

Within cosmology, a central puzzle concerns the nature of what must be a knowledge repository of the most fundamental kind, one that encodes the laws of physics and the exact values of approximately 30 fundamental parameters. This knowledge repository is extremely fine-tuned to produce complexity; almost any minuscule random variation in any parameter would result in a universe without the complexity of even atoms. Lee Smolin (Perimeter Institute for Theoretical Physics) has proposed the theory of cosmological natural selection [8] to explain this puzzle in cosmology. In this theory, a black hole in a parent universe generates a child universe, which inherits physical laws and parameters from the parent. Over many generations, a typical universe evolves the complex features required to produce black holes, features such as atoms, complex chemistry, and stars. Once this level of complexity is achieved, other Darwinian subroutines may produce additional levels of complexity, such as biological and cultural complexity.

The knowledge repository involved with each level of existence describes an autopoietic (self-creating and maintaining) strategy for existence which evolves over time. Essentially, these strategies involve complex interactions, designed to exploit loopholes in the second law of thermodynamics and achieve some form of existence.

MP: Excellent, thank you for all those great examples. It’s remarkable how the knowledge concept seems central to domains—from the quantum to the biocultural to the cosmological—which, superficially, seem so disparate. Now let’s return to that point about entropy I noted above, because this is another fundamental concept that integrates seemingly diverse domains. It turns out that in information theory, the mathematical formula Shannon created to define ignorance (the inverse of knowledge) is exactly the same as that used in thermodynamics to define entropy (disorder). I was blown away to learn this fact (from your great book, Darwin Does Physics), because it provides such an elegant link between physics, biology, and any other domain in which Darwinian processes may operate. My view of this link is that in any such domain, entropy can be thought of as equivalent to ignorance, and adaptation as equivalent to knowledge, and entropy/ignorance gets transformed into adaptation/knowledge via Darwinian selection, utilizing Bayesian inference. Is that an accurate description of your universal Darwinism framework?

JC: When Shannon [11] was working on his revolutionary theory of information in the late 1940s, he found that an odd mathematical expression held a central place in his developing theory. He shared this with the great mathematician John von Neumann and asked von Neumann what he should name it. Von Neumann jokingly suggested he should name it "entropy," as it had the same mathematical form as the familiar thermodynamic entropy.

As is so often the case in science, what seemed initially like an amusing coincidence actually pointed to a deep hidden connection. E.T. Jaynes was able to demonstrate [14] that both cases of entropy refer to ignorance within an inferential process. In the case of Shannon’s entropy, it is the ignorance of the model used by the receiver of information before the information is actually received. In the case of thermodynamic entropy, it is the ignorance of the scientific model used to describe a thermodynamic microstate—a model that, for example, specifies the exact location and momentum of every gas molecule in a volume. If the model knows any macrovariables such as temperature or pressure, it has some (but not very much) statistical knowledge of the exact microstate. Entropy is the number of bits of information required to move the model from its current state of uncertainty or ignorance to a state of certainty, where it would exactly describe the complete microstate. It is a measure of a model’s current state of ignorance.
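
For reference, and this is standard textbook material rather than anything specific to Campbell's framework, the two quantities von Neumann was joking about really do share a single form. Writing p_i for the probability the model assigns to message or microstate i:

```latex
% Shannon (information) entropy, in bits, and Gibbs (thermodynamic) entropy
% share one functional form; only the logarithm base and the constant differ.
\[
  H = -\sum_i p_i \log_2 p_i
  \qquad\qquad
  S = -k_B \sum_i p_i \ln p_i
\]
% For the same probability distribution, S = (k_B \ln 2) H: thermodynamic
% entropy is information-theoretic ignorance expressed in different units.
```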

Mathematically, every probabilistic model has the property of entropy. Realistically, entropy tends to increase, as specified by the second law. We might understand this as a consequence of the world changing in a somewhat random manner, and therefore the ignorance of any model of the world will only increase if it does not track these changes with new evidence. Within universal Darwinism, we see nature’s counter to the second law: the constant search for evidence with which to maintain the accuracy of its models that specify strategies for existence. The only mathematically correct means of updating models with evidence is Bayesian inference. This testing is achieved through the construction of adaptations coded by the internal model, and the evidence concerning the relative ability of those adaptations to contribute to existence. This might be analogous to a scientist constructing an experimental apparatus to test which of the many hypothetical states of a phenomenon are actually adapted to exist.

MP: Brilliant. Could you provide other illustrations, from outside biology, of how Darwinian selection acts as an anti-entropic process? Let’s take quantum Darwinism, for example, because of all universal Darwinism applications, it’s probably the most counterintuitive. In quantum Darwinism, how does selection act to reduce entropy and generate adaptation?

JC: As you noted, quantum Darwinism is very difficult to wrap one’s head around. Everyone knows that quantum physics is weird, and the core of this weirdness is that the quantum domain is filled with many varieties of unfamiliar states. The vast number of "superposed" states of a quantum system never become part of what we perceive as normal, "classical" reality. Why is that? Why does the vast majority of information describing quantum states never make it into our classical reality?

Wojciech Zurek [9] explains this puzzle with his theory of quantum Darwinism. Essentially, he demonstrates that very little quantum information or knowledge can survive the transfer to its environment. Only classical information can survive the transfer. It is therefore selected, in a Darwinian sense, and this selected information composes classical reality. This is similar to the notion that only special states of genetic knowledge can survive in the environment, and that these genetic sequences are selected and come to form biological reality.

Now, finally, I can attempt an answer to your question! The quantum wave function is the knowledge repository of the quantum domain and, like all probabilistic models, it has the property of entropy. The wave function encodes the physical attributes of the quantum system, much as genes encode phenotypes. The wave function contains exact knowledge of the physical states and therefore is minimally ignorant—that is, it has low entropy. Indeed, Zurek named one of his earliest methods for predicting which quantum states could exist in classical reality the "predictability sieve," which is a mathematical method of identifying states with the lowest entropy. Thus, we can view the quantum states that are able to achieve existence in classical reality as low entropy states, specifically adapted for that purpose.

MP: Wow, well, if Darwinian selection can be said to have "accomplishments," then its most impressive accomplishment may be how it can seemingly transform the bizarre quantum world into what we perceive as good old classical reality. What’s most fascinating to me about this is it implies that classical events can be thought of as quantum-level adaptations, and therefore that the fabric of reality itself is something like an incomprehensibly vast network of such adaptations. Do you think that’s a fair way to characterize the world that Zurek is proposing?

JC: Yes, that is a good characterization. In biology it is uncontroversial to view phenotypes as bundles of adaptations and this notion generalizes well to universal Darwinism. The problem that nature is grappling with is existence, and in that struggle for existence those entities which do exist must be specifically adapted for that purpose. The adaptive history of any entity is recorded in its knowledge repository and this historical knowledge is used to instantiate new iterations or generations of adaptive systems, each with small variations. Those variations that are sufficiently adapted to achieve existence serve to update the knowledge repository with the secrets of their success. Within universal Darwinism we may well consider the generalized phenotype to be an "adaptive system" or "adaptive bundle."

MP: John, thanks again for speaking with us. I’ve really appreciated having the opportunity to learn from your responses myself, and also to share your pioneering universal Darwinism framework with a wider audience.


References available on:

SOURCE
 

[Attachment: Universal Darwinism-FINAL.png]




An AI with consciousness but lacks empathy, go figure....

BTW how come a person like you who believes in crackpot science and conspiracy theory all of a sudden made our lord and savior Musky look like a clown? Is your job on the motoring industry? :lol::rofl:
 

What's the matter, new year spirit gettin' way over the head...? :lol:

Either that or you're workin' the keyboard still fully inebriated hehe.

First off, all ur lines are off the mark this time so perhaps we'll just leave it for the holiday excesses. :lol:

Crackpot science and conspiracy theory subscriber? Goodness, u might have mistaken me for someone else around here. If not u can enlighten me or anyone else by pointing to any post of mine that smacks of any hint of crackpot science and conspiracy theory. :p :lol:
 

You know I was portraying/parody what a typical Elon Musk fanboy would be like.

Too lazy to back read but it is something to do with you agreeing to a thread about super wealthy families controlling the world and also I asked you before about the thunderbolt project then you said they are "OK" so why did you made Elon's opinion about evil AI?
 

Well if you were attempting parody you don't come across successfully. Time perhaps to brush up on the literary style...?

I'm surprised you label the fact that 1 to 10 percent of global population wield the control of the resources of the world economy as a conspiracy theory. The informatics and all the pertinent data are all over the place from respectable institutions and other secondhand disseminators. All you need is crosscheck rather than cite an excuse just so you could hold on to a pet opinion.

Thunderbolt project? What's that, the one in tandem with the Revive Zeus project. Let's substantiate our claims, otherwise what separates us from common charlatans...

And about conspiracy theories, I won't be quick to dismiss them like I used to in the past. Why? Because while it's only too easy to brand them as crackpots, there are many occasions when they have had the last laugh after they were proven correct afterall. And again you need to help yourself with that because I or anybody else might not feel obligated to help judgmental people out there. It's easy to be a skeptic, much harder still to dig around for truth and not come out as another common stereotype.
 