Symbianize Forum


Science. Technology. Civilization. Culture.

Stormer0628

Intro


Funny how human science and technology are racing ahead on a planet whose climate is getting nastier and collapsing every day as far as comfort and habitability are concerned. So which one will win out in the end?

Perhaps science and technology will save the day. Perhaps not. Who knows? Human ingenuity gave us agriculture to feed millions, and the Industrial Revolution to support billions. But those same advances drastically changed the climate makeup of the planet. It's like yin and yang, really.

So why not cast our eyes, wearied though they may be, to the forefront of human activity? Let's see if the latest developments in science and technology can help us divine where we humans are heading.

Everything except the topics covered in my other S & T specialist threads will be covered here.

And what a fitting way to start this thread: scientists have just released findings that take us back to the dawn of the most important period in human history, when humanity fully embraced cooperation through agriculture, giving rise to the myriad civilizations we have now. The cause? A cataclysmic cosmic event akin to the one that brought about the extinction of the dinosaurs. This one wiped out the mammoths, along with countless humans and other forms of life at the time. The witness? A monument. Where? In Gobekli Tepe, of all places....






Ancient stone carvings confirm that a devastating comet hit Earth 13,000 years ago killing thousands and triggering the rise of civilizations


  • Scientists were analyzing symbols carved on pillars at Gobekli Tepe in Turkey
  • Using memorial carvings they pinpointed a comet impact to around 11,000 BC
  • The comet triggered a mini ice age that lasted 1,000 years
  • This ice age forced humans to develop farming techniques to grow their crops

Ancient symbols carved into stone at an archaeological site in Turkey tell the story of a devastating comet impact that triggered a mini ice age more than 13,000 years ago, scientists believe.

Evidence from the carvings, made on a pillar known as the Vulture Stone, suggests that a swarm of comet fragments hit the Earth in around 11,000 BC.

One image of a headless man is thought to symbolize human disaster and extensive loss of life.

The devastating event, which wiped out creatures such as woolly mammoths, also helped spark the rise of civilization.




Ancient stone carvings confirm that a swarm of comets hit Earth 13,000 years ago, sparking the rise of civilizations and wiping out the woolly mammoth. Pictured are the stone carvings used in the team's research, found on pillar 43 or 'the Vulture Stone' at Gobekli Tepe in Turkey


Scientists have speculated for decades that a comet could have caused the sharp drop in temperature during a period known as the Younger Dryas.

The Younger Dryas is seen as a crucial period in humanity's history as it coincides with the beginnings of agriculture and the first Neolithic civilizations.

Scientists were analyzing the mysterious symbols carved onto stone pillars at Gobekli Tepe in southern Turkey to find out if they could be linked to constellations.

Engineers from the University of Edinburgh studied animal carvings made on a pillar – known as the vulture stone – at the site.

By interpreting the animals as astronomical symbols, and using computer software to match their positions to patterns of stars, researchers dated the event to 10,950 BC.
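
A quick aside to make the method concrete: this kind of dating works because of Earth's axial precession, the slow wobble of our planet's axis (a cycle of roughly 25,772 years) that gradually shifts where constellations sit in the sky, so a carved snapshot of the sky can be matched to an epoch. The study itself did this with planetarium software; the toy Python sketch below only shows the underlying arithmetic.

[CODE]
# Illustrative arithmetic only -- the researchers matched carvings against
# simulated skies; this just shows how far the sky drifts over the interval.
PRECESSION_PERIOD_YR = 25772      # approximate axial precession cycle
years_elapsed = 10950 + 2017      # from 10,950 BC to the study's publication
shift_deg = (360.0 * years_elapsed / PRECESSION_PERIOD_YR) % 360
print(f"The sky has precessed roughly {shift_deg:.0f} degrees since 10,950 BC")
[/CODE]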

It probably resulted from the break-up of a giant comet in the inner solar system.

This is around the time the Younger Dryas period began according to ice core data from Greenland, which pinpoints the event to 10,890 BC.

Before the comet strike, large fields of barley and wheat had allowed roaming hunters in the Middle East to set up permanent base camps.

But the ice-cold conditions created by the impact forced these hunters to band together and find new ways to grow crops.

They developed watering and selective breeding to help their crops last against the harsh climate, forming modern farming practices.

The carvings appear to have remained important to the people of Gobekli Tepe for millennia, the Edinburgh researchers said.




By interpreting the animals as astronomical symbols, and using computer software to match their positions to patterns of stars, researchers dated the event to 10,950 BC. This image shows the position of the sun and stars on the summer solstice of 10,950 BC


This suggests that the event and cold climate that followed likely had a serious impact.

The team suggest the images were intended as a record of the cataclysmic event.

A further carving showing a headless man may indicate human disaster and extensive loss of life, they said.

Furthermore, symbolism on the pillars indicates that the long-term changes in Earth's rotational axis were recorded at this time using an early form of writing.

The symbolism suggests that Gobekli Tepe was an observatory for meteors and comets.


attachment.php

The find supports a theory that Earth experiences periods when comet strikes are more likely, owing to its orbit intersecting rings of comet fragments in space (stock image)




SOURCE
 

First Evidence of Higher Consciousness Found



Reaching a higher state of consciousness is a concept you're more likely to hear a spiritualist spout than a scientist, but now neuroscientists at the University of Sussex claim to have found the first evidence of just such a state. From wakefulness down to a deep coma, consciousness is on a sliding scale measured by the diversity of brain signals, and the researchers found that when under the influence of psychedelic drugs, that diversity jumps to new heights above the everyday baseline.

The research builds on data gathered about a year ago by a team at Imperial College London, which dosed up volunteers with psychedelics, including LSD, psilocybin and ketamine, then scanned their brains with magnetoencephalographic (MEG) techniques to examine the effects. This new study set out to determine how a psychedelic state would compare to other levels of wakefulness and unconsciousness, according to a scale of brain signal diversity measured by monitoring the magnetic fields produced by the brain.

When a person is asleep, their brain signals are far less diverse than when they're awake and aware, and past research has noted that diversity varies with the stage of the sleep cycle. Different types of anaesthesia induce even lower scores, and diversity bottoms out for those in a vegetative state. But this is the first time signal diversity has been seen to be higher than the normal readings of an alert, conscious mind.
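
To make "signal diversity" concrete: measures of this kind are commonly built on Lempel-Ziv complexity, which counts how many distinct patterns a binarized signal contains. The Python sketch below is a minimal illustration of the idea, not the study's actual MEG pipeline.

[CODE]
import numpy as np

def lempel_ziv_complexity(seq: str) -> int:
    """Count the distinct phrases seen while scanning left to right (LZ76-style)."""
    phrases, phrase, count = set(), "", 0
    for ch in seq:
        phrase += ch
        if phrase not in phrases:
            phrases.add(phrase)
            count += 1
            phrase = ""
    return count + (1 if phrase else 0)

def signal_diversity(signal: np.ndarray) -> float:
    """Binarize a signal around its mean, then normalize its complexity
    against a shuffled copy (which is close to maximally diverse)."""
    binary = "".join("1" if x > signal.mean() else "0" for x in signal)
    shuffled = "".join(np.random.permutation(list(binary)))
    return lempel_ziv_complexity(binary) / lempel_ziv_complexity(shuffled)

# A regular, sleep-like rhythm scores low; a noisy, awake-like one scores near 1.
t = np.linspace(0, 10, 2000)
print(signal_diversity(np.sin(2 * np.pi * t)))                          # low
print(signal_diversity(np.sin(2 * np.pi * t) + np.random.randn(2000)))  # ~1
[/CODE]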

"This finding shows that the brain-on-psychedelics behaves very differently from normal," says Anil Seth, corresponding author of the study. "During the psychedelic state, the electrical activity of the brain is less predictable and less 'integrated' than during normal conscious wakefulness – as measured by 'global signal diversity.' Since this measure has already shown its value as a measure of 'conscious level', we can say that the psychedelic state appears as a higher 'level' of consciousness than normal – but only with respect to this specific mathematical measure."

Interestingly, the more intense a trip the participant reported, the more diverse their brain signals appeared to be. That finding could help scientists better understand the connection between the level of consciousness and what specifically someone is conscious of.

"We found correlations between the intensity of the psychedelic experience, as reported by volunteers, and changes in signal diversity," says Seth. "This suggests that our measure has close links not only to global brain changes induced by the drugs, but to those aspects of brain dynamics that underlie specific aspects of conscious experience."

But as appealing as a higher state of consciousness might sound, the researchers (and we here at New Atlas) aren't trying to encourage drug use. The team is careful to point out that "higher" doesn't necessarily mean "better," and the key takeaway from the study is that a psychedelic experience is a distinct conscious state.

"Rigorous research into psychedelics is gaining increasing attention, not least because of the therapeutic potential that these drugs may have when used sensibly and under medical supervision," says Robin Cahart-Harris, another author of the study. "The present study's findings help us understand what happens in people's brains when they experience an expansion of their consciousness under psychedelics. People often say they experience insight under these drugs – and when this occurs in a therapeutic context, it can predict positive outcomes. The present findings may help us understand how this can happen."

In the future, the researchers will turn their attention to trying to figure out the biological mechanics behind specific parts of the experience, such as hallucinations.

The research was published in the journal Scientific Reports.

SOURCE


- - - Updated - - -










A remarkable terrible-headed lizard fossil found in China shows an embryo inside the mother — clear evidence that some reptiles were giving birth to live babies.

A quarter of a billion years ago, what is now Southwest China was covered by a shallow sea. Within that sea, some pretty big and nasty-looking creatures roamed, including the Dinocephalosaurus. Dinocephalosaurus had a hugely elongated neck and liked to swallow its prey whole — this being a significant detail. Paleontologists have found a fossil of such a sea monster, which is exciting enough, but within it, they found something else: another fossil. At first, they thought it was the animal's lunch, but the embryo faced forward, as opposed to any potential prey, which would be swallowed head-first. Making this distinction was not easy.

“I was not sure if the embryonic specimen was the last lunch of the mother or its unborn baby,” said Jun Liu, a paleontologist at the Hefei University of Technology in China, and leader of the study. “Upon closer inspection and searching the literature, I realized that something unusual had been discovered.” Furthermore, the somewhat curved position was typical for an embryo. “The curled posture of the embryonic skeleton is also typical for vertebrate embryos,” Liu said.

Professor Jonathan Aitchison, who was also involved in the study, said this provides the first evidence of live birth in a class of animals thought to only lay eggs.

“Live birth is well known in mammals, where the mother has a placenta to nourish the developing embryo,” Professor Aitchison said. “Live birth is also very common among lizards and snakes, where the babies sometimes ‘hatch’ inside their mother and emerge without a shelled egg.”

Dinocephalosaurus, which means “terrible-headed lizard”, was not technically a dinosaur — it preceded dinosaurs by a few million years. However, it does belong to a group called Archosauromorpha, a clade that includes the animals which later became crocodiles, alligators, birds and, of course, dinosaurs. Even by modern standards, finding a member of this clade giving birth to live babies would be unusual, but for a member this old it is indeed stunning.

As an aquatic reptile, Dinocephalosaurus would be expected to lay its eggs on dry land, like turtles do. But because of its very particular shape and its long neck, it was completely unsuited for trips out of the water. When your neck is a third of your total body length, you’d much rather stay underwater, which is arguably why the creatures developed this birthing mechanism.

Professor Chris Organ from Montana State University, who also worked on the study, said evolutionary analysis showed that this instance of live birth was also associated with genetic sex determination.

“Some reptiles today, such as crocodiles, determine the sex of their offspring by the temperature inside the nest,” he said. “We identified that Dinocephalosaurus, a distant ancestor of crocodiles, determined the sex of its babies genetically, like mammals and birds. This new specimen from China rewrites our understanding of the evolution of reproductive systems.”

But not everyone is convinced. Michael Caldwell, an expert in extinct marine reptiles and chair of the biological sciences department at the University of Alberta, says that Choristodera, a group of semi-aquatic reptiles, gave birth to live young too. The only question is whether they too can be placed under Archosauromorpha, which according to Caldwell is “a giant grab bag” of seemingly disparate animals. But in the grand scheme of things, whether it’s Dinocephalosaurus or Choristodera that can lay a claim to fame is not relevant. What is important is understanding how environmental pressures pushed these creatures towards this unusual approach. Also, perhaps instead of wondering why these creatures got pregnant, we should ask ourselves why their descendants don’t.

Journal Reference: Jun Liu, Chris L. Organ, Michael J. Benton, Matthew C. Brandley, Jonathan C. Aitchison. Live birth in an archosauromorph reptile. Nature Communications, 2017; 8: 14445 DOI: 10.1038/ncomms14445

SOURCE


- - - Updated - - -







How did the dinosaur become the dinosaur? Somewhere along the line, the ancestor of dinosaurs diverged from the ancestor of crocodiles, a momentous split in the evolution of vertebrates that ultimately set the stage for the age of dinos. But the details of that split remain mysterious, thanks to a dearth of fossils of early dinosaur relatives. Enter the newly identified 247-million- to 242-million-year-old Teleocrater rhadinus, a close relative of dinosaurs that also happened to walk on all fours and share some key features with the ancestors of crocodiles. These shared features, the authors say, suggest that it’s time to rethink what we thought we knew about dinosaurs’ earliest ancestors.

“We’ve been waiting a long time to find fossils like this that fit in this part of the family tree,” says Randall Irmis, a paleontologist at the Natural History Museum of Utah in Salt Lake City, who was not involved in the work. “This has pretty big implications for how we understand the early evolution of dinosaurs.”

Some 251 million years ago, at the end of the Permian period, a mass extinction wiped out most of life on Earth. In its wake arose a group of egg-laying reptile precursors called archosaurs, the common ancestors of dinosaurs, flying reptiles known as pterosaurs, and crocodiles. At some point during the next period, the Triassic, pterosaurs and dinosaurs split off from the crocodile lineage.

Those two different lineages, avian versus crocodilian, have long been identified by their types of ankle joints. Dinosaurs and pterosaurs all have a version of a hinged, birdlike ankle, rather than the crocodilelike ankle with a ball-and-socket joint.

But exactly what early dinosaurs and their closest relatives looked like has been something of a mystery, because few fossils exist from the dawn of the dinosaurs. And many of the fossils that do exist, collected perhaps decades to a century ago, languish unidentified in museum drawers.

Indeed, Teleocrater isn’t a completely new discovery. A specimen was first unearthed in what is now Tanzania in the 1930s and sat in London’s Natural History Museum until 1956, when Ph.D. candidate Alan Charig (later a paleontologist at the museum) dubbed it T. rhadinus (referring to the shape of the animal’s hip and its slender body). Charig, who died in 1997 but is included as an author on the new paper, speculated that it was some sort of early dinosaur relative. But the fossil was in pieces, just bits of vertebrae and pelvis and limb, and difficult to place on the family tree.

Then in 2015, Sterling Nesbitt, a paleontologist at Virginia Polytechnic Institute and State University in Blacksburg, and a team of researchers headed back to southern Tanzania to take another look at the middle Triassic rocks where Teleocrater was first discovered. This time, the rocks—estimated to be about 247 million to 242 million years old—yielded several individuals of the same species. With that new bounty, the researchers were able to catalog many more of the creature’s features—enough to place it on the vertebrate family tree.

Teleocrater, Nesbitt and his co-authors report online today in Nature, belongs at the very base of the avian lineage that later gave rise to dinosaurs. It has a characteristic muscle scar on the upper leg bone that is found only in the avian lineage of birds and dinosaurs and is missing in crocodiles and their relatives. But Teleocrater also had a crocodilelike ankle, with a ball-and-socket joint. That suggests that the crocodile ankle came first, and the bird ankle evolved later.

That’s key, because the ankle joint has been used for decades as an indicator of avian versus crocodilian lineage, so Teleocrater must be close to the split between them. And in several respects, Nesbitt says, “Teleocrater looks more like the relatives of the crocodiles than the relatives of dinosaurs.” The carnivorous animal, which was roughly the size of a small lion, walked on all fours; its forelimbs and hindlimbs were similar in proportion, and the limbs themselves were pretty short relative to the length of the body.

Seemingly small details like these can produce ripples throughout paleontology collections, because they can help researchers properly classify fossils that had seemed to be walking contradictions. “These fossils exist in museums around the world, but until you find a keystone—something that helps you understand the full anatomy [of a species]—you won’t understand where these animals go on the tree of life,” Nesbitt says.

After identifying Teleocrater as an ancestor along the avian lineage, the authors could group it with several other difficult-to-place animals, including Dongusuchus and Spondylosoma, naming a new group of long-necked, carnivorous quadrupeds dubbed Aphanosauria (hidden or obscure lizards, in Greek). Aphanosaurs, they suggest, are the earliest group in the avian stem lineage to diverge from the crocodile lineage. And that suggests that these bird and dinosaur ancestors were far more diverse and widely distributed across the globe during the middle Triassic than once thought.

The find may also alter what paleontologists hunt for in the field, as well as how they understand existing collections, says Max Langer, a paleontologist at the University of São Paulo in Rio Claro, Brazil. “Now that we know the diagnostic features of this group of archosaurs, everybody working on middle Triassic rocks will be looking for something similar.”

SOURCE



- - - Updated - - -









The most comprehensive study on the bones of Homo floresiensis, a species of tiny human discovered on the Indonesian island of Flores in 2003, has found that they most likely evolved from an ancestor in Africa and not from Homo erectus as has been widely believed.

The study by The Australian National University (ANU) found Homo floresiensis, dubbed "the hobbits" due to their small stature, was most likely a sister species of Homo habilis—one of the earliest known species of human, found in Africa and dating to 1.75 million years ago.

Data from the study concluded there was no evidence for the popular theory that Homo floresiensis evolved from the much larger Homo erectus, the only other early hominid known to have lived in the region with fossils discovered on the Indonesian mainland of Java.

Study leader Dr Debbie Argue of the ANU School of Archaeology & Anthropology said the results should help put to rest a debate that has been hotly contested ever since Homo floresiensis was discovered.

"The analyses show that on the family tree, Homo floresiensis was likely a sister species of Homo habilis. It means these two shared a common ancestor," Dr Argue said.

"It's possible that Homo floresiensis evolved in Africa and migrated, or the common ancestor moved from Africa then evolved into Homo floresiensis somewhere."

Homo floresiensis is known to have lived on Flores until as recently as 54,000 years ago.

The study was the result of an Australian Research Council grant in 2010 that enabled the researchers to explore where the newly-found species fits in the human evolutionary tree.

Where previous research had focused mostly on the skull and lower jaw, this study used 133 data points ranging across the skull, jaws, teeth, arms, legs and shoulders.

Dr Argue said none of the data supported the theory that Homo floresiensis evolved from Homo erectus.

"We looked at whether Homo floresiensis could be descended from Homo erectus," she said.

"We found that if you try and link them on the family tree, you get a very unsupported result. All the tests say it doesn't fit—it's just not a viable theory."

Dr Argue said this was supported by the fact that in many features, such as the structure of the jaw, Homo floresiensis was more primitive than Homo erectus.

"Logically, it would be hard to understand how you could have that regression—why would the jaw of Homo erectus evolve back to the primitive condition we see in Homo floresiensis?"

Dr Argue said the analyses could also support the theory that Homo floresiensis could have branched off earlier in the timeline, more than 1.75 million years ago.

"If this was the case Homo floresiensis would have evolved before the earliest Homo habilis, which would make it very archaic indeed," she said.

Professor Mike Lee of Flinders University and the South Australian Museum used statistical modeling to analyse the data.

"When we did the analysis there was really clear support for the relationship with Homo habilis. Homo floresiensis occupied a very primitive position on the human evolutionary tree," Professor Lee said.

"We can be 99 per cent sure it's not related to Homo erectus and nearly 100 per cent chance it isn't a malformed Homo sapiens," Professor Lee said.



SOURCE
 

Re: First Evidence of Higher Consciousness Found




The six-foot mollusc lives in stinking mud and in a symbiotic relationship with chemosynthetic bacteria that feed on hydrogen sulfide gas.

A worm-like creature that grows to almost two metres long, lives in stinking mud and doesn’t eat a thing is shedding new light on evolution and the nature of co-dependence.

Described in Proceedings of the National Academy of Sciences, the giant shipworm (Kuphus polythalamia) has been found alive for the first time, after a scientist saw it in a wildlife documentary aired on Philippine television and realised it was a species unknown to science.

In one sense, Kuphus polythalamia has been known for centuries, because the characteristic long empty shells it leaves behind have often been collected by fisherfolk and travellers.

However, no one had ever seen a live specimen, much less discovered where it lived or what it ate.

The answer to the first question, it turns out, is remote lagoons filled with rotting wood and deep, sucking mud that emits large amounts of hydrogen sulfide – often called, for good reason, rotten-egg gas.

And the answer to the second question seems to be nothing at all – and it is at this point that a mollusc almost as tall as a basketball player gets even more interesting.

A team of researchers led by Daniel Distel of Northeastern University in Massachusetts, US, discovered that the shipworms harbor in their gills colonies of bacteria that survive by digesting the hydrogen sulfide – a type of consumption known as chemosynthesis.

Chemosynthetic bacteria are not uncommon. They colonise many environments where sunlight is absent, ranging from deep-sea hydrothermal vents to animal corpses and rotting plant matter. While some derive energy from rotten-egg gas, others process ammonia, molecular hydrogen, ferrous iron or sulfur.

In processing the hydrogen sulfide, the bacteria in Kuphus polythalamia’s gills produce organic carbon that provides its nourishment. That this has been a very long-term symbiosis is evidenced by the fact that many of the shipworm’s internal digestive organs have atrophied.

The discovery of the living giants provides welcome support for Distel, who has been studying the shipworm family, Teredinidae, for decades.

All other types of shipworm are comparatively small, and live exclusively in rotting wood. Some 20 years ago Distel suggested that other species would need to strike up intimate relationships with chemosynths in order to colonise less narrow environments.

In Kuphus polythalamia he appears to have found proof, and perhaps the first of many examples.

“We are also interested to see if similar transitions can be found for other animals that live in unique habitats around the world,” he says.




SOURCE
 

Re: First Evidence of Higher Consciousness Found



Just as ancient Greeks fantasized about soaring flight, today's imaginations dream of melding minds and machines as a remedy to the pesky problem of human mortality.

Can the mind connect directly with artificial intelligence, robots and other minds through brain-computer interface (BCI) technologies to transcend our human limitations?

Over the last 50 years, researchers at university labs and companies around the world have made impressive progress toward achieving such a vision.

Recently, successful entrepreneurs such as Elon Musk (Neuralink) and Bryan Johnson (Kernel) have announced new startups that seek to enhance human capabilities through brain-computer interfacing.

How close are we really to successfully connecting our brains to our technologies? And what might the implications be when our minds are plugged in?





Eb Fetz, a researcher here at the Center for Sensorimotor Neural Engineering (CSNE), is one of the earliest pioneers to connect machines to minds.

In 1969, before there were even personal computers, he showed that monkeys can amplify their brain signals to control a needle that moved on a dial.

Much of the recent work on BCIs aims to improve the quality of life of people who are paralyzed or have severe motor disabilities.

You may have seen some recent accomplishments in the news: University of Pittsburgh researchers use signals recorded inside the brain to control a robotic arm.

Stanford researchers can extract the movement intentions of paralyzed patients from their brain signals, allowing them to use a tablet wirelessly.

Similarly, some limited virtual sensations can be sent back to the brain, by delivering electrical current inside the brain or to the brain surface.

What about our main senses of sight and sound? Very early versions of bionic eyes for people with severe vision impairment have been deployed commercially, and improved versions are undergoing human trials right now.

Cochlear implants, on the other hand, have become one of the most successful and most prevalent bionic implants – over 300,000 people around the world use the implants to hear.

The most sophisticated BCIs are 'bi-directional' BCIs (BBCIs), which can both record from and stimulate the nervous system. At our center, we're exploring BBCIs as a radical new rehabilitation tool for stroke and spinal cord injury.

We've shown that a BBCI can be used to strengthen connections between two brain regions or between the brain and the spinal cord, and reroute information around an area of injury to reanimate a paralyzed limb.

With all these successes to date, you might think a brain-computer interface is poised to be the next must-have consumer gadget.

Still early days
But a careful look at some of the current BCI demonstrations reveals we still have a way to go: When BCIs produce movements, they are much slower, less precise and less complex than what able-bodied people do easily every day with their limbs.

Bionic eyes offer very low-resolution vision; cochlear implants can electronically carry limited speech information, but distort the experience of music.

And to make all these technologies work, electrodes have to be surgically implanted – a prospect most people today wouldn't consider.

Not all BCIs, however, are invasive. Noninvasive BCIs that don't require surgery do exist; they are typically based on electrical (EEG) recordings from the scalp and have been used to demonstrate control of cursors, wheelchairs, robotic arms, drones, humanoid robots, and even brain-to-brain communication.
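
To give a feel for what such a noninvasive system actually computes, here is a deliberately simplified Python sketch of one classic approach: estimating power in the 8-12 Hz alpha band of a scalp EEG signal and turning it into a binary command. The signal here is synthetic and the threshold arbitrary; real BCIs use calibrated classifiers.

[CODE]
import numpy as np

def band_power(signal: np.ndarray, fs: float, lo: float, hi: float) -> float:
    """Mean spectral power of the signal between lo and hi Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

fs = 256.0                           # sampling rate (Hz)
t = np.arange(0, 2.0, 1.0 / fs)
# Synthetic EEG: a strong 10 Hz alpha rhythm buried in noise
eeg = 2.0 * np.sin(2 * np.pi * 10 * t) + np.random.randn(len(t))

alpha = band_power(eeg, fs, 8, 12)
broadband = band_power(eeg, fs, 1, 40)
# Toy decision rule: pronounced alpha -> issue a command, otherwise rest
print("command" if alpha / broadband > 3.0 else "rest")
[/CODE]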

But all these demos have been in the laboratory – where the rooms are quiet, the test subjects aren't distracted, the technical setup is long and methodical, and experiments last only long enough to show that a concept is possible.

It's proved very difficult to make these systems fast and robust enough to be of practical use in the real world.




Even with implanted electrodes, another problem with trying to read minds arises from how our brains are structured. We know that each neuron and its thousands of connected neighbors form an unimaginably large and ever-changing network.

What might this mean for neuroengineers?

Imagine you're trying to understand a conversation between a big group of friends about a complicated subject, but you're allowed to listen to only a single person.

You might be able to figure out the very rough topic of what the conversation is about, but definitely not all the details and nuances of the entire discussion.

Because even our best implants only allow us to listen to a few small patches of the brain at a time, we can do some impressive things, but we're nowhere near understanding the full conversation.

There is also what we think of as a language barrier. Neurons communicate with each other through a complex interaction of electrical signals and chemical reactions.

This native electro-chemical language can be interpreted with electrical circuits, but it's not easy.

Similarly, when we speak back to the brain using electrical stimulation, it is with a heavy electrical "accent." This makes it difficult for neurons to understand what the stimulation is trying to convey in the midst of all the other ongoing neural activity.

Finally, there is the problem of damage. Brain tissue is soft and flexible, while most of our electrically conductive materials – the wires that connect to brain tissue – tend to be very rigid.

This means that implanted electronics often cause scarring and immune reactions that make the implants lose effectiveness over time. Flexible biocompatible fibers and arrays may eventually help in this regard.

Coadapting, cohabiting
Despite all these challenges, we're optimistic about our bionic future. BCIs don't have to be perfect. The brain is amazingly adaptive and capable of learning to use BCIs in a manner similar to how we learn new skills like driving a car or using a touchscreen interface.

Similarly, the brain can learn to interpret new types of sensory information even when it's delivered noninvasively using, for example, magnetic pulses.




Ultimately, we believe a 'co-adaptive' bidirectional BCI, where the electronics learns with the brain and talks back to the brain constantly during the process of learning, may prove to be a necessary step to build the neural bridge. Building such co-adaptive bidirectional BCIs is the goal of our center.

We are similarly excited about recent successes in targeted treatment of diseases like diabetes using 'electroceuticals' – experimental small implants that treat a disease without drugs by communicating commands directly to internal organs.

And researchers have discovered new ways of overcoming the electrical-to-biochemical language barrier. Injectable 'neural lace', for example, may prove to be a promising way to gradually allow neurons to grow alongside implanted electrodes rather than rejecting them.

Flexible nanowire-based probes, flexible neuron scaffolds and glassy carbon interfaces may also allow biological and technological computers to happily coexist in our bodies in the future.

From assistive to augmentative
Elon Musk's new startup Neuralink has the stated ultimate goal of enhancing humans with BCIs to give our brains a leg up in the ongoing arms race between human and artificial intelligence.

He hopes that with the ability to connect to our technologies, the human brain could enhance its own capabilities – possibly allowing us to avoid a potential dystopian future where AI has far surpassed natural human capabilities.

Such a vision certainly may seem far-off or fanciful, but we shouldn't dismiss an idea on strangeness alone. After all, self-driving cars were relegated to the realm of science fiction even a decade and a half ago – and now share our roads.

In a closer future, as brain-computer interfaces move beyond restoring function in disabled people to augmenting able-bodied individuals beyond their human capacity, we need to be acutely aware of a host of issues related to consent, privacy, identity, agency and inequality.

At our center, a team of philosophers, clinicians and engineers is working actively to address these ethical, moral and social justice issues and offer neuroethical guidelines before the field progresses too far ahead.

Connecting our brains directly to technology may ultimately be a natural progression of how humans have augmented themselves with technology over the ages, from using wheels to overcome our bipedal limitations to making notations on clay tablets and paper to augment our memories.

Much like the computers, smartphones and virtual reality headsets of today, augmentative BCIs, when they finally arrive on the consumer market, will be exhilarating, frustrating, risky and, at the same time, full of promise.

This article was originally published by The Conversation. Read the original article.

SOURCE
 

Re: First Evidence of Higher Consciousness Found



Scientists have found more new evidence that Parkinson's could start in the gut before spreading to the brain, observing lower rates of the disease in patients who had undergone a procedure called a truncal vagotomy.

The operation removes sections of the vagus nerve - which links the digestive tract with the brain - and over the course of a five-year study, patients who had this link completely removed were 40 percent less likely to develop Parkinson's than those who hadn't.

According to a team led by Bojing Liu from the Karolinska Institutet in Sweden, that's a significant difference, and it backs up earlier work linking the development of the brain disease to something happening inside our bellies.

If we can understand more about how this link operates, we might be better able to stop it.

"These results provide preliminary evidence that Parkinson's disease may start in the gut," says Liu.

"Other evidence for this hypothesis is that people with Parkinson's disease often have gastrointestinal problems such as constipation, that can start decades before they develop the disease."

The vagus nerve helps control various unconscious processes like heart rate and digestion, and resecting parts of it in a vagotomy is usually done to treat an ulcer when the stomach is producing a dangerous level of acid.

For this study, the researchers looked at 40 years of data from Swedish national registers, to compare 9,430 people who had a vagotomy against 377,200 people from the general population who hadn't.

The likelihood of people in these two groups to develop Parkinson's was statistically similar at first - until the researchers looked at the type of vagotomy that had been carried out on the smaller group.

In total, 19 people (just 0.78 percent of the sample) developed Parkinson's more than five years after a truncal (complete) vagotomy, compared to 60 people (1.08 percent) who had a selective vagotomy.

Compare that to the 3,932 people (1.15 percent) who had no surgery and developed Parkinson's after being monitored for at least five years, and it seems clear that the vagus nerve is playing some kind of role here.
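
A quick sanity check on those raw percentages (unadjusted arithmetic only; the study's "40 percent" figure comes from statistical models that adjust for follow-up time and other factors):

[CODE]
# Unadjusted comparison of the reported rates (illustrative arithmetic only;
# the study's ~40% figure comes from adjusted survival analysis).
truncal, selective, no_surgery = 0.78, 1.08, 1.15  # % developing Parkinson's
reduction = (no_surgery - truncal) / no_surgery
print(f"Raw risk reduction, truncal vagotomy vs. no surgery: {reduction:.0%}")
# -> ~32%, so the adjusted estimate (40%) is in the same ballpark
[/CODE]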

So what's going on here? One hypothesis the scientists put forward is that gut proteins start folding in the wrong way, and that folding 'mistake' gets carried up to the brain somehow, being spread from cell to cell.

Parkinson's develops as neurons in the brain get killed off, leading to tremors, stiffness, and difficulty with movement - but scientists aren't sure how it's caused in the first place. The new study gives them a helpful tip about where to look.

The latest research isn't alone in its conclusions. Last year, tests on mice showed links between certain mixes of gut bacteria and a greater likelihood of developing Parkinson's.

What's more, earlier this year a study in the US identified differences between the gut bacteria of those with Parkinson's compared with those who didn't have the condition.

All of this is useful for scientists looking to prevent Parkinson's, because if we know where it starts, we can block off the source.

But we shouldn't get ahead of ourselves - as the researchers behind the new study point out, Parkinson's is a complex condition, and they weren't able to include controls for all potential factors, including caffeine intake and smoking.

It's also worth noting that Parkinson's is classed as a syndrome: a collection of different but related symptoms that may have multiple causes.

"Much more research is needed to test this theory and to help us understand the role this may play in the development of Parkinson's," says Lui.

The research has been published in Neurology.

SOURCE
 

Re: First Evidence of Higher Consciousness Found



A new study has shown that people with chronic fatigue syndrome have abnormal levels of specific gut bacteria—providing even more evidence that the condition isn't "just in a person's head."

For decades, millions of people have reported experiencing symptoms now associated with a condition called chronic fatigue syndrome—a debilitating disease that causes brain fog, severe pain, and exhaustion so extreme patients can't go about their daily lives, and sometimes can't even get out of bed. But a physical cause has been elusive, leaving many feeling that their condition isn't being taken seriously.

It was only in 2015 that the US Institute of Medicine detailed a comprehensive way to diagnose chronic fatigue syndrome/myalgic encephalomyelitis (ME/CFS), and earlier this year, scientists linked the condition to faulty cell receptors in immune cells for the first time—which explains why the side effects can be so varied and hard to pin down.

But there are still no effective treatments for the disease, and no cure—some commonly prescribed treatments for the condition have been cognitive behavioral therapy and exercise, neither of which has any evidence to support that it works, and both could actually be doing more harm than good.

Now, new research has shown that patients with ME/CFS have abnormal levels of specific gut bacteria - and those levels change depending on the severity and type of symptoms they have.

"Individuals with ME/CFS have a distinct mix of gut bacteria and related metabolic disturbances that may influence the severity of their disease," said one of the researchers, Dorottya Nagy-Szakal from Columbia University's Mailman School of Public Health.

"By identifying the specific bacteria involved, we are one step closer to more accurate diagnosis and targeted therapies," added lead researcher Ian Lipkin.

The study adds to research from last year, which showed that up to 80 percent of patients with ME/CFS could be accurately diagnosed by looking at their gut bacteria.

And it's also known that up to 90 percent of ME/CFS patients have irritable bowel syndrome (IBS), so the latest research began to untangle the specific gut bacteria changes associated with each condition.

The team followed 50 ME/CFS patients and 50 healthy controls, who had been carefully matched. They tested the number of bacterial species in faecal samples, and looked at the immune molecules in their blood.

They found that seven distinct intestinal bacterial genera were strongly associated with ME/CFS, so much so that an elevated presence of all of them could predict a diagnosis.

The genera were:


  • Faecalibacterium
  • Roseburia
  • Dorea
  • Coprococcus
  • Clostridium
  • Ruminococcus
  • Coprobacillus
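
As an illustration of how an "elevated presence could predict a diagnosis", here is a minimal Python sketch of the general approach: fit a classifier on bacterial abundance profiles for the seven genera above. The numbers below are randomly generated stand-ins, not the study's data.

[CODE]
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
labels = np.array([1] * 50 + [0] * 50)   # 50 patients, 50 matched controls
# Hypothetical relative abundances for the seven genera; patient samples
# are shifted upward purely for illustration.
abundances = rng.normal(size=(100, 7))
abundances[labels == 1] += 0.8

model = LogisticRegression().fit(abundances, labels)
print(f"Training accuracy: {model.score(abundances, labels):.2f}")
[/CODE]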


There were also specific changes seen in the gut bacteria of those who had chronic fatigue syndrome with IBS and those who didn't have IBS.

Interestingly, when the team measured bacterial metabolic pathways—the ways that bacteria break down food and send signals to the brain - there were clear differences between the healthy controls and the ME/CFS group.

There were also measurable differences depending on the severity of a patient's symptoms, which suggests that there are different subtypes of ME/CFS that could be identified.

While this study involved only a small sample size, with further verification this could be the first step towards coming up with targeted ways to not only diagnose the debilitating disease, but also treat it.

"Our analysis suggests that we may be able to subtype patients with ME/CFS by analysing their fecal microbiome," said one of the team, Brent L. Williams.

"Subtyping may provide clues to understanding differences in manifestations of disease."

The research has been published in the journal Microbiome.


SOURCE
 

Exercise in a Pill: Evidence



Couch potatoes, rejoice! A recent study showing that sedentary mice boosted with a certain chemical came to resemble mice that led active lifestyles suggests it won't be long before couch potatoes can have bodies as healthy as those of their exercising counterparts.

But before the latter can bitterly ask, "How unfair is that?", let's admit that high on the wish list of technological marvels, right behind guilt-free meat and hoverboards, is a way to avoid exercise entirely and still keep yourself healthy.

Amazingly, at least according to a new study published Wednesday in Cell Metabolism, such a thing may not be so far-fetched after all.

Researchers from the California-based Salk Institute used mice to learn how they could coax the body to produce more of a protein called peroxisome proliferator-activated receptor delta, or PPARD. Their earlier work had found that mice genetically altered to constantly pump out PPARD resembled those who regularly exercised: They could run longer, avoid weight gain, and use insulin more effectively.

This time around, the researchers enlisted normal mice raised to be sedentary couch potatoes. They dosed them with a chemical compound known to increase PPARD levels for eight weeks, and found they could get the same gym-rat effect without the need for exercise. The findings were also a bit of a redemption for the team — earlier research of theirs that used a lower dose of the chemical for a shorter time had failed to get the same tantalizing results.

Compared to a group of untouched mice, the chemically-boosted mice were able to run on average 70 percent longer before becoming fully exhausted, from 160 minutes to 270 minutes.

“Exercise activates PPARD, but we’re showing that you can do the same thing without mechanical training,” said lead author Weiwei Fan, a research associate at the Institute, in a statement. “It means you can improve endurance to the equivalent level as someone in training, without all of the physical effort.”

More intriguing, from the researchers’ perspective, is how PPARD seems to boost the mice’s endurance. It triggered genes that regulated when they burned their stores of fat, as expected, but it also suppressed other genes that urged their muscles to use glucose, their main source of energy. Presuming these same effects would translate over to people, the researchers noted, that suggests PPARD tells our muscles to avoid using glucose for the sake of keeping our brain fueled. And that then forces our muscles to switch to their secondary fuel of fat, allowing us to run longer.

Another study also published in Cell Metabolism this month found that PPARD’s influence on our fitness could go even deeper.

They found that PPARD seems to help the body maintain its level of mitochondria in skeletal muscle cells, as well as increase mitochondria following exercise. Mitochondria are considered the powerhouses of the cell, since they break down nutrients to provide fuel. Our body produces fewer mitochondria as we age, scientists have found, which could help explain why people become less active as they reach middle age.

Right now, the dream of using a pill to exercise is still just that, and no single study, especially one involving mice, should raise our hopes all too much. But human trials of drugs that increase PPARD are a likely reality in the near future, the Salk researchers said. Their chemical booster could theoretically be used to help people with chronic conditions like obesity and type-2 diabetes better burn fat, for instance.

SOURCE


- - - Updated - - -







Captain Picard: “How do we reason with them, let them know that we are not a threat?”

Guinan: “You don’t. At least I’ve never known anyone who did.”

With this brief, ominous exchange, the heroes of Star Trek: The Next Generation are introduced to one of their most formidable enemies: the Borg, a race of cyborgs whose minds are linked to a collective “hive mind” through sophisticated technology. The collective expands their civilization through a process of mental and physical “assimilation”: They find new intelligent beings, like humans, implant them with Borg technology, and integrate them into the hive mind, erasing their previous identities.

Individual Borg are not conscious in the way humans are, and they have no sense of individuality. The hive mind is a dictator, an unquestioned voice that commands each individual. The Borg nature is split in two, an executive called the collective and a follower called the drone.

For the humans living in the Star Trek universe, the prospect of assimilation is terrifying. When asked why humans resist assimilation, Chief Engineer Geordi La Forge says, “For somebody like me, losing that sense of individuality is almost worse than dying.”

For many humans living in the real world, the fictional Borg are similarly unsettling. But why? What is it exactly about the Borg that irks us so? Could it be that somewhere in the recesses of our minds we sense something unpleasant about ourselves when we view the Borg? What if they reflect a different kind of human mentality, one that was actually Borg-like?


The internal voices that commanded bicameral humans eventually fell silent, and humanity was forever changed.

An intriguing, albeit highly controversial, idea very much like this was actually proposed by Julian Jaynes, an American psychologist who taught at Princeton University. In his 1976 book, The Origin of Consciousness in the Breakdown of the Bicameral Mind, Jaynes theorizes that human consciousness—by which he means the ability and tendency to think about ourselves as individuals—emerged suddenly, and relatively recently in history, around 3,000 years ago. That would mean that anatomically modern humans were alive for hundreds of thousands of years before becoming conscious.

Jaynes argues that before this recent emergence of consciousness, humanity experienced the world in a manner similar to the Borg. There was not a holistic self with free will, but rather a two-part psyche, or “bicameral mind,” in which one part gave “orders” to a second part that acted on those orders. For bicameral humans, “volition came as a voice that was in the nature of a neurological command, in which the command and the action were not separated, in which to hear was to obey.” Jaynes says these commands were often perceived as coming from gods, and that they live on today as the internally hallucinated voices heard by some schizophrenics.

In this era, humans did not have an internal self that allowed for introspection or reflection. As such, the Borg are an excellent example of Jaynes’ description of the bicameral mind of early humans. The primary difference is that bicameral humans, unlike the Borg, were not technologically linked together in a single collective mind. Without collective thought, bicameral humans would have had trouble solving and managing complex problems. As ancient communities became more literate, urban, and complicated, Jaynes says the bicameral mind “broke down,” a change that is reflected in the increasing consciousness of literary characters created soon after that time. After the advent of writing, the internal voices that commanded bicameral humans eventually fell silent, and humanity was forever changed.

But what about the Borg? Are they destined to remain nothing more than unconscious automatons for all Star Trek eternity? In a case of art imitating life (if in fact Jaynes’ theory is correct), it turns out it is possible for the Borg to undergo a transformation similar to the one undergone by bicameral humans. In the episode “I, Borg,” a single injured cyborg is captured and disconnected from the collective by the Enterprise crew. With the Borg executive silenced, this drone eventually becomes conscious and develops individuality, a close analog for Jaynes’ theory of the breakdown of the bicameral mind in humans. Before this transition, the isolated drone is unable to function, but afterward, it becomes endowed with a key feature of consciousness: the concept of “I.” In essence, this Borg—now named Hugh—attains human consciousness.

So maybe what we really fear is not the behavior of a fictional enemy, but a dark remnant of our historical selves. If Jaynes is correct, the transformation from internally commanded, unconscious beings to thinking, reflecting people would have to be considered the most significant and far-reaching adaptation in the history of our species. It was a change that gave us that which we are most loath to lose: our individuality.

SOURCE
 

Attachments

  • exercisepill-final.jpg
    exercisepill-final.jpg
    394 KB · Views: 2,301
  • bicameral_borg-final.jpg
    bicameral_borg-final.jpg
    286.6 KB · Views: 2,257
Last edited:
How Humans Sort Facts from Faith

HOW DO HUMANS SORT FACTS FROM FAITH:
Neurologists Have Identified Brain Lesions That Could Be Linked to Religious Fundamentalism




Hunting for God in our grey matter seems to be a popular topic for neurologists, with past studies comparing religious highs with drug-induced ones, linking spiritual experiences with neurotransmitters such as serotonin, and identifying which parts of the brain (if any) could be responsible for a faith in the supernatural.

Now a new study has found that those with damage to a section of the brain associated with planning become less open to new ideas, which could explain why some people are more likely to become extreme in their religious beliefs.

Led by neurologist Jordan Grafman from Northwestern University in Illinois, the study dug into a tragic yet useful pool of data known as the Vietnam Head Injury Study (VHIS).

In the late 1960s during the Vietnam War, Korean War veteran and neurologist William Caveness developed a registry of approximately 2,000 soldiers who had experienced head trauma during the conflict.

The detailed information collected by medical personnel proved to be invaluable - not just to Caveness, but to other researchers such as Grafman.

Thanks to the quick medical treatment many of the wounded received, a large number of soldiers survived their injuries and returned home, where researchers continued to follow up on their health and wellbeing.

Decades later, Grafman is still making interesting discoveries on the brain's inner workings by studying it when it's damaged.

Last year, he found parts of the brain in the frontal and temporal regions were responsible for downplaying the significance of mystical experiences.

In this latest research, he and his colleagues took the study a step further by examining the records of 119 Vietnam veterans who had suffered a penetrating head injury and comparing them with the records of 30 veterans with no brain injuries as controls.

Tests conducted on the group during a relatively recent phase of the VHIS included a religious fundamentalism scale - a standardised measure that requires participants to respond to statements such as "To lead the best, most meaningful life, one must belong to the one, true religion."

They also involved a measure of the veterans' overall intelligence and their cognitive flexibility, assessed by having them sort cards into different categories according to different sets of rules.

Grafman and his team then used computed tomography (CT) scans to map the position and size of the brain lesions left by the veterans' injuries.

The researchers focussed on those veterans with damage to their dorsolateral prefrontal cortex (dlPFC), which was already known to play a cognitive role in spiritual experiences, as well as playing a role in problem solving, planning, and task management.

They identified a relationship between lesions in these areas, the strength of the veterans' religious convictions, and low cognitive flexibility, suggesting that areas such as the dlPFC play a key role in helping us remain open to diverse ideas inspired by religious experiences.

Similar studies on the ventromedial prefrontal cortex (vmPFC) have found damage to this region makes people more susceptible to misleading advertising, leading other researchers to propose an idea called false tagging theory to explain the role those parts of the brain might play in providing us with a generous serving of scepticism.

Not so, say the researchers.

"Our results also challenge the 'false belief tagging' hypothesis," the team writes in their study, claiming their research shows fundamentalism is less about actively reserving doubt and more about openness to new experience.

It's important to note this doesn't imply a belief in the supernatural is caused by brain damage; forming a belief involves a rich mix of neurological processes that cannot be localised to any single piece of brain tissue.

Rather, the research suggests damage to certain parts of the brain involved in considering new evidence could make it harder for a person to evaluate their existing religious beliefs against other ideas.

There are also the usual caveats to consider in single studies such as these; for example, the subjects were all older, American males who had experienced not just physical trauma, but the psychological trauma of war.

But the research fits in with what we know about the role of the prefrontal regions of the brain in setting the stage for framing religious experiences.

"We need to understand how distinct religious beliefs are from moral, legal, political, and economic beliefs in their representations in the brain, the nature of conversion from one belief system to another, the difference between belief and agency, and the nature of the depth of knowledge that individuals use to access and report their beliefs," Grafman told Eric W. Dolan at Psypost.

Extreme religious ideology is a divisive political issue in today's world, and looks to continue to be so in the future.

Research such as this might go some small way towards helping us understand the neurological underpinnings of how our brains sort facts from faith.

This research was published in Neuropsychologia.

SOURCE
 

Ear Damage Implicated in Out-Of-Body Experiences

Out-of-body experiences (OBE) were once the domain of spirituality and mysticism, but now science has moved in to investigate. Oddly enough, researchers found that this vivid hallucinatory experience could actually be caused by damage to people's ears.

A new study by Aix-Marseille Université in France, published in the journal Cortex, suggests that people have a "significantly higher occurrence" of OBEs if they suffer from dizziness and problems with the inner ear, known as peripheral vestibular disorders. Additionally, most of the patients started to have OBEs only after they began suffering from dizziness for the first time.

People's experiences of OBEs vary, but the term usually denotes a feeling of floating outside one's body, as if in a lucid dream. Many people report having these experiences at the time of a "near death experience" or an extreme physical trauma.

One of the patients in this study explains the sensation as feeling "like I'm outside of myself. I feel like I'm not in myself.” Another said it felt like “he was divided into two persons, one who had not changed posture and another new person on his right, looking somewhat outwardly. Then the two somatic individuals approached each other, merged, and the vertigo disappeared.”

Led by neuroscientist Christophe Lopez, the researchers compared 210 patients who suffer from dizziness with 210 age- and gender-matched people with no history of dizziness. Out of those who experienced dizziness, 14 percent said they had experienced an out-of-body experience. Among the healthy participants, only 5 percent reported these experiences. Many of the patients who had both dizziness and a history of OBEs had also been diagnosed with depression, anxiety, depersonalization, or migraines.
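
As a quick sanity check on that 14-versus-5-percent gap, here is a back-of-envelope two-proportion z-test in Python (my own arithmetic, with counts rounded from the reported percentages; the paper's own statistics may differ):

from math import sqrt, erfc

n = 210                              # people per group
obe_dizzy, obe_healthy = 29, 10      # ~14% and ~5% of 210, rounded
p1, p2 = obe_dizzy / n, obe_healthy / n

pooled = (obe_dizzy + obe_healthy) / (2 * n)
se = sqrt(pooled * (1 - pooled) * (2 / n))   # standard error under the null
z = (p1 - p2) / se
p_value = erfc(abs(z) / sqrt(2))             # two-sided normal approximation

print(f"z = {z:.2f}, two-sided p = {p_value:.4f}")   # z ~ 3.2, p ~ 0.001

Even with the counts rounded, the gap between the two groups is far too large to be a statistical fluke.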

They believe it could be due to the vestibular system in the inner ear, which helps control balance and eye movements. Problems with the vestibular system often cause symptoms such as dizziness, whirling sensations, lightheadedness, floating sensations, and difficulty focusing the eyes.

The reasons behind this link are not crystal clear; the researchers didn't look for a direct causal relationship between OBEs and vestibular disorders, they simply found a correlation between the two.

“Altogether, our data indicate that OBE in patients with dizziness may arise from a combination of perceptual incoherence evoked by the vestibular dysfunction with psychological factors (depersonalization-derealization, depression and anxiety) and neurological factors (migraine),” the study concludes.



SOURCE
 

Breakthrough device heals organs with a single touch

Researchers at The Ohio State University Wexner Medical Center and Ohio State's College of Engineering have developed a new technology, Tissue Nanotransfection (TNT), that can generate any cell type of interest for treatment within the patient's own body. This technology may be used to repair injured tissue or restore function of aging tissue, including organs, blood vessels and nerve cells.

Results of the regenerative medicine study were published in the journal Nature Nanotechnology.

"By using our novel nanochip technology, injured or compromised organs can be replaced. We have shown that skin is a fertile land where we can grow the elements of any organ that is declining," said Dr. Chandan Sen, director of Ohio State's Center for Regenerative Medicine & Cell Based Therapies, who co-led the study with L. James Lee, professor of chemical and biomolecular engineering with Ohio State's College of Engineering in collaboration with Ohio State's Nanoscale Science and Engineering Center.

Researchers studied mice and pigs in these experiments. In the study, researchers were able to reprogram skin cells to become vascular cells in badly injured legs that lacked blood flow. Within one week, active blood vessels appeared in the injured leg, and by the second week, the leg was saved. In lab tests, this technology was also shown to reprogram skin cells in the live body into nerve cells that were injected into brain-injured mice to help them recover from stroke.





"This is difficult to imagine, but it is achievable, successfully working about 98 percent of the time. With this technology, we can convert skin cells into elements of any organ with just one touch. This process only takes less than a second and is non-invasive, and then you're off. The chip does not stay with you, and the reprogramming of the cell starts. Our technology keeps the cells in the body under immune surveillance, so immune suppression is not necessary," said Sen, who also is executive director of Ohio State's Comprehensive Wound Center.

TNT technology has two major components: First is a nanotechnology-based chip designed to deliver cargo to adult cells in the live body. Second is the design of specific biological cargo for cell conversion. This cargo, when delivered using the chip, converts an adult cell from one type to another, said first author Daniel Gallego-Perez, an assistant professor of biomedical engineering and general surgery who also was a postdoctoral researcher in both Sen's and Lee's laboratories.

TNT doesn't require any laboratory-based procedures and may be implemented at the point of care. The procedure is also non-invasive. The cargo is delivered by zapping the device with a small electrical charge that's barely felt by the patient.

"The concept is very simple," Lee said. "As a matter of fact, we were even surprised how it worked so well. In my lab, we have ongoing research trying to understand the mechanism and do even better. So, this is the beginning, more to come."

Researchers plan to start clinical trials next year to test this technology in humans, Sen said.

SOURCE
 


3,700-Year-Old Babylonian Clay Tablet Explains How the Old Masters Conquered Astronomy and Civilization




For a long time, Europe and most modern civilizations have suffered from a form of cultural conceit: the assumption that ancient peoples couldn't have come up with knowledge too advanced for their ancient brains. Yet ancient Babylon, for one, keeps throwing punches at that modern conceit. Just recently, a tablet showed how Babylonians tracked the astronomical movements of Jupiter long before Galileo and Kepler had any idea about them. And now this ... a finding that could overturn the history of mathematics....

A Babylonian clay tablet dating back 3,700 years has been identified as the world's oldest and most accurate trigonometric table, suggesting the Babylonians beat the ancient Greeks to the invention of trigonometry by over 1,000 years.

The tablet, known as Plimpton 322, was discovered in the early 1900s in what is now southern Iraq, but researchers have always been baffled about what its purpose was.

Thanks to a team from the University of New South Wales (UNSW) in Australia, the mystery may have been solved. More than that, the Babylonian method of calculating trigonometric values could have something to teach mathematicians today.

"Our research reveals that Plimpton 322 describes the shapes of right-angle triangles using a novel kind of trigonometry based on ratios, not angles and circles," says one of the researchers, Daniel Mansfield.

"It is a fascinating mathematical work that demonstrates undoubted genius."

Experts established early on that Plimpton 322 showed a list of Pythagorean triples: sets of whole numbers a, b, and c satisfying a² + b² = c², the relation between the sides of a right-angled triangle. The big debate has been about what those triples were actually for.

Are they just a series of exercises for teaching, for example? Or are they something more profound?

Babylonian mathematics used a base 60 or sexagesimal system (like the minute markers on a clock face), rather than the base 10 or decimal system we use today.

By applying Babylonian mathematical models, the researchers were able to show that the tablet would originally have had 6 columns and 38 rows. They also show how the mathematicians of the time could've used the Babylonian system to come up with the numbers on the tablet.

The researchers suggest that the tablet may well have been used by ancient scribes to make calculations for building palaces, temples, and canals.

But if the new study is right, then the Greek astronomer Hipparchus, who lived about 120 BC, is not the father of trigonometry that he's long been regarded as. Scholars date the tablet to around 1822-1762 BC.

What's more, because of the way the Babylonians did their maths and geometry, it's the most accurate trigonometric table as well as the oldest.

The reason is that a sexagesimal system has more exact fractions than a decimal system, which means less rounding. Whereas 10 has only two divisors besides 1 and itself – 2 and 5 – 60 has ten: 2, 3, 4, 5, 6, 10, 12, 15, 20, and 30.

Cleaner fractions mean fewer approximations and more accurate maths, and the researchers suggest we can learn from it today.
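
To see why, here is a quick Python sketch (my own illustration, not from the paper): a fraction 1/n has a terminating expansion in base b exactly when every prime factor of n also divides b, so base 60 – whose prime factors are 2, 3, and 5 – admits exact reciprocals for many more numbers than base 10 does:

def prime_factors(n):
    """Return the set of prime factors of n."""
    factors, p = set(), 2
    while p * p <= n:
        while n % p == 0:
            factors.add(p)
            n //= p
        p += 1
    if n > 1:
        factors.add(n)
    return factors

def terminates(n, base):
    """True if 1/n has a finite expansion in the given base."""
    return prime_factors(n) <= prime_factors(base)

exact_10 = [n for n in range(2, 61) if terminates(n, 10)]
exact_60 = [n for n in range(2, 61) if terminates(n, 60)]
print("exact in base 10:", exact_10)   # 11 values: 2, 4, 5, 8, 10, 16, ...
print("exact in base 60:", exact_60)   # 25 values, including 3, 6, 12, 15, ...

One seventh is a mess in both systems, but one third is 0.3333... forever in decimal and exactly 20/60 in sexagesimal.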

"This means it has great relevance for our modern world," says Mansfield. "Babylonian mathematics may have been out of fashion for more than 3,000 years, but it has possible practical applications in surveying, computer graphics and education."

"This is a rare example of the ancient world teaching us something new."

The research has been published in Historia Mathematica.


SOURCE
 

Ecstasy Has Just Been Labelled a 'Breakthrough Therapy' For PTSD by The FDA

Says it's been obvious for many years



How will the Philippines and other countries react now that a party drug long branded as dangerous has been so drastically redesignated?

The US Food and Drug Administration (FDA) has determined that 3,4-methylenedioxymethamphetamine (MDMA), also known as ecstasy, is a 'breakthrough therapy' in the treatment of post-traumatic stress disorder (PTSD).

Thanks to this designation, the drug could have a faster path to pharmaceutical approval.

The Multidisciplinary Association for Psychedelic Studies (MAPS) announced the FDA's ruling last week, revealing that they can now move forward on two of their upcoming 'Phase 3' trials.

The goal of these trials is to determine how effectively the drug can be used to treat those suffering from PTSD. The trials will include 200 to 300 participants, and the first trial will begin to accept subjects in 2018.

"For the first time ever, psychedelic-assisted psychotherapy will be evaluated in Phase 3 trials for possible prescription use, with MDMA-assisted psychotherapy for PTSD leading the way," said Rick Doblin, Founder and Executive Director of MAPS.

The trials will be held in the U.S., Canada, and Israel, and MAPS plans to open talks with the European Medicines Agency in the hopes of expanding testing to include Europe. For now, the focus is on securing the funding they require.

According to Science, the organisation is still in the process of raising money for the trials, and thus far, they've only managed to secure US$13 million, about half of their goal.

Since 1986, MAPS has been conducting MDMA trials in the hopes of proving the drug's therapeutic value. Following the 2011 release of a small study in the US, the drug has gained traction as a potential treatment for PTSD.

Since then, scientists have been pushing for additional testing, but ecstasy's stigma as a harmful street drug has hindered progress. The FDA's new designation could change that.

"This is not a big scientific step," David Nutt, a neuropsychopharmacologist at Imperial College London, explained to Science. "It's been obvious for 40 years that these drugs are medicines. But it's a huge step in acceptance."

Previous MAPS trials exploring how well MDMA could treat PTSD have yielded favourable results, contributing to the FDA's aforementioned decision. In the association's Phase 2 trials, 107 people who had suffered PTSD for an average of 17.8 years were treated using MDMA-assisted psychotherapy.

After two months, 61 percent of the participants no longer suffered from PTSD. After a year, that number increased to 68 percent, according to the MAPS press release.

PTSD needs only a small trigger, like a sound, smell, or object, to bring a traumatic memory rushing back. Nearly 8 million adults in the US experience PTSD in a given year, and children can suffer from it as well.

Should MDMA prove to be an effective and safe treatment, it could help millions of people live normal, healthy lives without fear of the next debilitating PTSD episode.

This article was originally published by Futurism

SOURCE
 

New Tech Is Giving Humanity Multiple Paths to Immortality




Both Kurzweil and the 2045 program have predicted that machine-human singularity will be achieved by 2045, but what are the methods for achieving such an end, and what are the consequences of doing so?


The 2045 Movement and Four Routes to Immortality


Herodotus's Fountain of Youth. Rowling's Philosopher's Stone. Barrie's Neverland. Ovid's Cumaean Sibyl. The idea of immortality has been ingrained in humanity's creative consciousness since our humble beginnings. In the present day, eternal youth may soon move out of the realm of myth and into reality, thanks to developing technologies.

The 2045 movement, founded by Russian billionaire Dmitry Itskov in 2011, aims to make humans immortal by transferring their personalities to a carrier superior to the human body. The movement's ideology is "to create technologies enabling the transfer of an individual's personality to a more advanced non-biological carrier, and extending life, including to the point of immortality."

There are four main avenues the movement is pursuing in its attempt to achieve human immortality. The avatars proceed chronologically, each stage representing a further degree of disembodiment.




Avatar A aims to give the human brain control of a remote humanoid robot by using a brain-computer interface (BCI). While this might seem outlandish, we must remember that controlling robots using thought — the most rudimentary aspect of the project — was achieved a decade ago. This technology has been accelerated by recent advancements in the field of prosthetics, which show that the human nervous system is capable of interfacing with prosthetic enhancements.

Avatar B, rather than controlling a body remotely, seeks to implant the brain into the body itself. The process envisioned is to ‘turn off’ the brain, relocate it (metempsychosis), and then transplant it. As the brain, theoretically, would be in a robotic body, this stage creates a consciousness inhabiting a body that could be modified, augmented, or updated.

Avatar C, the next stage of embodiment, envisions a completely robotic body that the brain could be uploaded to. Hypothetically, this would require hardware and software solutions that could allow consciousness to be uploaded, and subsequently inserted, into a totally robotic body — or potentially many bodies. As the brain would become computerized — rather than it remaining as fleshy matter controlling a machine — this would allow the brain itself to be customized and the sentient robot, as a whole, to survive what a human body could not.

2045 gives very little information concerning Avatar D, but the skeleton of the idea — if you’ll pardon the oxymoron — is to create a “hologram-like avatar” or a “substance-independent mind.”

The Upside and Downside to Eternal Youth

Whether or not this technology will be actualized is hardly in question: it will eventually come about, perhaps even as soon as 2045, as Kurzweil has predicted. The more important question is whether or not it should. The possibility of immortality could have profound effects on the individual, as well as on society as a whole.

There are both positives and negatives concerning the idea of eternal youth. Immortality could mean that we would no longer suffer the fear of death, we could do more with our lifetimes, and the world’s greatest minds could continue to develop their thoughts. However, there could also be a strain on resources, serious psychological problems associated with extreme age, and stress on societal structures such as marriage and parenthood.

This dilemma is reminiscent of Ovid's Cumaean Sibyl: she asked for eternal life rather than eternal youth. Apollo therefore let her body wither, keeping her alive as she deteriorated until she was kept in a jar and, finally, was only a voice. Taking the story's moral to heart, we must keep asking what we hope to achieve by attaining eternal life: whether the journey is toward a tangible and positive gain, or whether we are simply scared of dying. Hopefully it is the former, or else we run the risk of being nothing more than a voice in a jar, an imitation of life.


SOURCE
 

Breaking: An Entirely New Type of Quantum Computing Has Been Invented

"It's amazing no one had thought of it before."




Australian researchers have designed a new type of qubit - the building block of quantum computers - that they say will finally make it possible to manufacture a true, large-scale quantum computer.

Broadly speaking, there are currently a number of ways to make a quantum computer. Some take up less space, but tend to be incredibly complex. Others are simpler, but if you want them to scale up you're going to need to knock down a few walls.

Some tried and true ways to capture a qubit are to use standard atom-taming technology such as ion traps and optical tweezers that can hold onto particles long enough for their quantum states to be analysed.

Others use circuits made of superconducting materials to detect quantum superpositions within the insanely slippery electrical currents.

The advantage of these kinds of systems is their basis in existing techniques and equipment, making them relatively affordable and easy to put together.

The cost is space – the technology might do for a relatively small number of qubits, but when you're looking at hundreds or thousands of them linked into a computer, the scale quickly becomes unfeasible.

Thanks to coding information in both the nucleus and electron of an atom, the new silicon qubit, which is being called a 'flip-flop qubit', can be controlled by electric signals, instead of magnetic ones. That means it can maintain quantum entanglement across a larger distance than ever before, making it cheaper and easier to build into a scalable computer.



"If they're too close, or too far apart, the 'entanglement' between quantum bits – which is what makes quantum computers so special – doesn't occur," says the researcher who came up with the new qubit, Guilherme Tosi, from the University of New South Wales in Australia.

The flip-flop qubit will sit in the sweet spot between those two extremes, offering true quantum entanglement across a distance of hundreds of nanometres.

In other words, this might be just what we've been waiting for to make silicon-based quantum computers scalable.

To be clear, so far we only have a blueprint of the device - it hasn't been built as yet. But according to team leader, Andrea Morello, the development is as important for the field as the seminal 1998 paper in Nature by Bruce Kane, which kicked off the silicon quantum computing movement.

"Like Kane's paper, this is a theory, a proposal - the qubit has yet to be built," says Morello. "We have some preliminary experimental data that suggests it's entirely feasible, so we're working to fully demonstrate this. But I think this is as visionary as Kane's original paper."

The flip-flop qubit works by coding information on both the electron AND nucleus of a phosphorus atom implanted inside a silicon chip, and connected with a pattern of electrodes. The whole thing is then chilled to near absolute zero and bathed in a magnetic field.

The qubit's value is then determined by combinations of a binary property called spin – if the spin is 'up' for an electron while 'down' for the nucleus, the qubit represents an overall value of 1. Reversed, and it's a 0.

That leaves the superposition of the spin-states to be used in quantum operations.
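
As a toy picture of that encoding, here is a minimal Python sketch (my own illustration, not the team's model): treat the two flip-flop configurations as the basis states of a qubit, and represent a general state as a normalized pair of complex amplitudes:

# Basis labels: |0> = electron 'down' / nucleus 'up',
#               |1> = electron 'up' / nucleus 'down'

def superposition(alpha, beta):
    """Return the normalized state (a, b) standing for a|0> + b|1>."""
    norm = (abs(alpha) ** 2 + abs(beta) ** 2) ** 0.5
    return (alpha / norm, beta / norm)

plus = superposition(1, 1)                # equal mix of both configurations
probs = [abs(a) ** 2 for a in plus]
print(probs)                              # [0.5, 0.5] - equal odds of reading 0 or 1

The real device manipulates these amplitudes with electric pulses; the sketch only shows why a qubit carries more than one classical bit of state.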

In the flip-flop design, researchers are able to control the qubit using an electric field instead of magnetic signals, which gives two advantages: it is easier to integrate with normal electronic circuits and, most importantly, it means qubits can communicate over larger distances.

"To operate this qubit, you need to pull the electron a little bit away from the nucleus, using the electrodes at the top. By doing so, you also create an electric dipole," says Tosi.

"This is the crucial point," adds Morello. "These electric dipoles interact with each other over fairly large distances, a good fraction of a micron, or 1,000 nanometres."

"This means we can now place the single-atom qubits much further apart than previously thought possible. So there is plenty of space to intersperse the key classical components such as interconnects, control electrodes and readout devices, while retaining the precise atom-like nature of the quantum bit."

"It's easier to fabricate than atomic-scale devices, but still allows us to place a million qubits on a square millimetre."

In short, the new flip-flop qubit strikes a balance that could make future quantum computers compact and potentially affordable.

"It's a brilliant design, and like many such conceptual leaps, it's amazing no-one had thought of it before," says Morello.

The research has been published in Nature Communications.


SOURCE
 

Life existed on Mars, shocking discovery suggests




Scientists have found key evidence which suggests life may once have existed on Mars.

Nasa's Curiosity rover has detected boron, a key ingredient for life, on the dusty surface of the Red Planet.

The discovery is a huge boost in the hunt for extraterrestrials and could back up a theory suggesting life on Mars may have been forced underground when disaster turned the planet into a "frigid desert".

Patrick Gasda, a postdoctoral researcher at Los Alamos National Laboratory said: "Because borates may play an important role in making RNA - one of the building blocks of life - finding boron on Mars further opens the possibility that life could have once arisen on the planet.

"Borates are one possible bridge from simple organic molecules to RNA. Without RNA, you have no life.

"The presence of boron tells us that, if organics were present on Mars, these chemical reactions could have occurred."

RNA is ribonucleic acid, a nucleic acid present in all modern life which is involved in the decoding and expression of genes from DNA.

It is known to be unstable, so unless boron is present it decomposes quickly.

Gasda's work is detailed in a study published this week in the journal Geophysical Research Letters.

It describes how Nasa's buggy found the element in calcium sulphate mineral "veins" in the rocky surface.

That means boron was present in Mars groundwater and indicates that the Gale crater, where Nasa's robo buggy is right now, may have been home to life.

It bolsters the bizarre theory that life originated on Mars and was carried to Earth on an asteroid.

Astronomer Caleb Scharf has previously claimed: "We can find pieces of Mars here on Earth and we suspect that there are pieces of Earth on Mars.

"If that material can carry living organisms on it, it's possible that we are Martian."

These hypotheses have even prompted bonkers scenes in which officials have asked Nasa experts whether life existed there in recent times.

Dana Rohrabacher, an American congressman, publicly asked a project scientist overseeing Nasa's Mars 2020 rover mission if aliens ever lived on the Martian surface.

He quizzed: "You have indicated that Mars was totally different thousands of years ago.

"Is it possible that there was a civilisation on Mars thousands of years ago?"

Nasa's Ken Farley responded: "So, the evidence is that Mars was different billions of years ago, not thousands of years ago, and there is no evidence I'm aware of..."

However, there soon may be life on Mars if tech entrepreneur Elon Musk has his way.

The SpaceX founder has announced plans to put humans on the surface of the Red Planet by 2030.

SOURCE
 

The mysterious Voynich manuscript has finally been decoded




The finding, which points to an absolutely middling affair, is totally disappointing given the kind of attention and discussion the mysterious manuscript has generated in the past. Oh well, one less world mystery at least, and back to boring life, say many....


Since its discovery in 1912, the 15th century Voynich Manuscript has been a mystery and a cult phenomenon. Full of handwriting in an unknown language or code, the book is heavily illustrated with weird pictures of alien plants, naked women, strange objects, and zodiac symbols. Now, history researcher and television writer Nicholas Gibbs appears to have cracked the code, discovering that the book is actually a guide to women's health that's mostly plagiarized from other guides of the era.

Gibbs writes in the Times Literary Supplement that he was commissioned by a television network to analyze the Voynich Manuscript three years ago. Because the manuscript has been entirely digitized by Yale's Beinecke Library, he could see tiny details in each page and pore over them at his leisure. His experience with medieval Latin and familiarity with ancient medical guides allowed him to uncover the first clues.

After looking at the so-called code for a while, Gibbs realized he was seeing a common form of medieval Latin abbreviations, often used in medical treatises about herbs. "From the herbarium incorporated into the Voynich manuscript, a standard pattern of abbreviations and ligatures emerged from each plant entry," he wrote. "The abbreviations correspond to the standard pattern of words used in the Herbarium Apuleius Platonicus – aq = aqua (water), dq = decoque / decoctio (decoction), con = confundo (mix), ris = radacis / radix (root), s aiij = seminis ana iij (3 grains each), etc." So this wasn't a code at all; it was just shorthand. The text would have been very familiar to anyone at the time who was interested in medicine.
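
To make the idea concrete, here is a toy expander in Python using the readings Gibbs lists (my own illustration; real medieval ligatures are far messier than whitespace-delimited tokens):

# Abbreviation readings as given by Gibbs for the herbal entries.
ABBREVIATIONS = {
    "aq": "aqua (water)",
    "dq": "decoque / decoctio (decoction)",
    "con": "confundo (mix)",
    "ris": "radacis / radix (root)",
    "s aiij": "seminis ana iij (3 grains each)",
}

def expand(text):
    """Naively replace known shorthand with its Latin expansion."""
    # Longer keys first, so "s aiij" wins over any shorter overlap.
    for abbr in sorted(ABBREVIATIONS, key=len, reverse=True):
        text = text.replace(abbr, ABBREVIATIONS[abbr])
    return text

print(expand("ris con aq"))
# -> radacis / radix (root) confundo (mix) aqua (water)

Read that way, an entry stops being a cipher and starts looking like a recipe: take a root, mix it, add water.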

Further study of the herbs and images in the book reminded Gibbs of other Latin medical texts. When he consulted the Trotula and De Balneis Puteolanis, two commonly copied medieval Latin medical books, he realized that a lot of the Voynich Manuscript's text and images had been plagiarized directly from them (they, in turn, were copied in part from ancient Latin texts by Galen, Pliny, and Hippocrates). During the Middle Ages, it was very common for scribes to reproduce older texts to preserve the knowledge in them. There were no formal rules about copyright and authorship, and indeed books were extremely rare, so nobody complained.

Once he realized that the Voynich Manuscript was a medical textbook, Gibbs explained, it helped him understand the odd images in it. Pictures of plants referred to herbal medicines, and all the images of bathing women marked it out as a gynecological manual. Baths were often prescribed as medicine, and the Romans were particularly fond of the idea that a nice dip could cure all ills. Zodiac maps were included because ancient and medieval doctors believed that certain cures worked better under specific astrological signs. Gibbs even identified one image—copied, of course, from another manuscript—of women holding donut-shaped magnets in baths. Even back then, people believed in the pseudoscience of magnets. (The women's pseudoscience health website Goop would fit right in during the 15th century.)

The Voynich Manuscript has been reliably dated to mere decades before the invention of the printing press, so it's likely that its peculiar blend of plagiarism and curation was a dying format. Once people could just reproduce several copies of the original Trotula or De Balneis Puteolanis on a printing press, there would have been no need for scribes to painstakingly collate its information into a new, handwritten volume.

Gibbs concluded that it's likely the Voynich Manuscript was a customized book, possibly created for one person, devoted mostly to women's medicine. Other medieval Latin scholars will certainly want to weigh in, but the sheer mundanity of Gibbs' discovery makes it sound plausible.

See for yourself—you can look at pages from the Voynich Manuscript here.

More explanatory content at the SOURCE
 

Brain inflammation is connected to virtually all types of mental illness



Are you suffering from any type of mental illness? Does your family have a long-running history of mental illness? If your answer to one or both is yes, then read on. What you will find here might offer some answers and solutions to help you and your loved ones.

Extensive research has shown that brain inflammation is connected to virtually all types of mental illness. Mood disorders such as depression and anxiety, as well as more serious conditions like autism, dementia, and even schizophrenia, have all been linked to inflammation of the brain.

Inflammation is also a contributing factor in such health issues as cardiovascular disease, asthma, and allergies; autoimmune diseases like arthritis and hypothyroidism may be influenced by inflammation as well. Please see Dr. Dave's blog on Psycho-Neuro-Immunology for more details.

What do we mean by inflammation, and why does it affect us negatively in so many ways?

Inflammation is an immune system response to environmental irritants, toxins, and infection. When the immune system is activated by one of these intruders, pro-inflammatory hormones signal the white blood cells to rush in and clean up the infected or damaged tissue. Once the invaders have been subdued, anti-inflammatory agents move in to begin the healing process.

In a normal immune system, a natural balance exists between inflammation and the anti-inflammatory agents. But in some cases, the immune system gets stuck in high gear, and symptoms of inflammation do not recede. This is known as chronic inflammation.

Inappropriate inflammation over a long period of time can lead to the damage or destruction of tissue; this tissue damage can ultimately lead to cardiovascular disease; cancer; neurodegenerative diseases such as Alzheimer's and Parkinson's disease and other forms of dementia; ADHD; autism; and mood disorders like anxiety and depression.





If you have had or are currently experiencing more than a few of the following health issues, this may be a sign that you have chronic inflammation:

  • Seasonal/Environmental Allergies
  • Frequent colds, infections, sinusitis
  • Asthma or bronchitis
  • A history of frequent cold sores or canker sores
  • Acne, eczema, or skin rashes
  • Hepatitis
  • Exposure to environmental toxins (pesticides, heavy metals, industrial chemicals)
  • A work environment with poor lighting or ventilation
  • Food allergies, sensitivities
  • Inflammatory bowel disease or colitis
  • Spastic colon
  • Osteoarthritis
  • An autoimmune disorder (rheumatoid arthritis, hypothyroid disease, lupus)
  • Cardiovascular disease, including a history of heart attack
  • Type II diabetes or obesity
  • Alzheimer's or Parkinson's disease, or a family history of either
  • ADD/ADHD
  • Autism
  • Mood or behavioral disorders (depression, anxiety disorders, etc.)
  • Consumption of more than 3 alcoholic beverages per week
  • Sedentary lifestyle, or less than 30 minutes of exercise 3X weekly.


If you are experiencing more than seven of these warning signs, you should be tested for inflammation.

Causes of Inflammation
What causes body and brain inflammation?

Digestive Imbalances

There is a great deal of evidence suggesting that inflammation has its roots in the gastrointestinal (GI) tract. The digestive system is designed to remove toxins, bacteria, and viruses from our food before they have a chance to reach the rest of the body; the GI tract is the body's first line of defense against infection and disease.

Unfortunately, the digestive tract is often overwhelmed by what we put into it. Poor nutrition, medications, stress, and environmental toxins can damage the gut and cause inflammation, which is then free to spread unchecked throughout the rest of the body.

Regular consumption of foods such as the following is largely responsible for inflammation:

  • Refined sugars
  • Processed and refined flours (white bread, cookies, pasta, crackers, and more)
  • Foods high in acids
  • Dairy products
  • Animal fats
  • Caffeine
  • Alcohol
  • Food Allergens (hidden food allergies cause body and brain inflammation)


Environmental and lifestyle factors also affect inflammation:

  • Exposure to toxic metals (mercury, lead, cadmium)
  • History of infections
  • Environmental toxins (pesticides, herbicides, food additives and preservatives)
  • Chronic stress
  • Lack of exercise, sedentary habits
  • Nutritional deficiencies (B12, vitamin D, essential fatty acids, vitamin C)
  • Overuse of antibiotics and acid blocking medications
  • Poor sleep habits.


Lab Tests for Inflammation

  • The C-reactive protein test is the most decisive test for detecting inflammation. This simple blood test can reveal high levels of C-reactive protein; CRP is produced by the liver in response to inflammation, infection and injury. It is available by fingerstick or blood draw.
  • Food allergy testing can uncover immune responses which may point to inflammation.
  • An inflammatory cytokine test can reveal evidence of inflammation, as well. Cytokines are proteins which act as messengers between immune cells to trigger an inflammatory response.


Inflammation Reduction Strategies
The good news is that there are several things you can do to promote your body's natural anti-inflammatory response and restore natural balance to your immune system.

  • Exercise stimulates your body's anti-inflammatory abilities and keeps your blood circulating at its optimum level. Start off slowly, and work your way up until you are getting 20 to 30 minutes of exercise at a minimum of 3 times per week.
  • Rest and stress management are vital to keeping your immune system in proper working order. Make sure you get enough sleep, and find a relaxation technique that you enjoy. Deep-breathing exercises are also an excellent restorative approach; do them five times a day, every day.

Eat An Anti-Inflammatory Diet

  • Make sure you get plenty of Omega-3 fatty acids. These oils are in short supply in our diet, and most people require a supplement to ensure they are getting enough Omega-3's in their system.
  • Avoid eating saturated fat by eating less butter, cream, cheese and other full-fat dairy products;
  • Avoid regular safflower and sunflower oils, corn oil, cottonseed oil, and mixed vegetable oils.
  • Avoid margarine, vegetable shortening, and all products listing them as ingredients. Avoid products made with partially hydrogenated oils.
  • Eat avocados and nuts, especially walnuts, cashews, almonds, and nut butters made from these nuts.
  • Good sources of omega-3 fatty acids are salmon (preferably fresh or frozen wild or canned sockeye), sardines packed in water or olive oil, herring, omega-3 fortified eggs; hemp seeds and flaxseeds (preferably freshly ground); or take a fish oil supplement.
  • A high-alkaline diet - one that includes plenty of green, leafy vegetables - is invaluable in combating inflammation.
  • Eat cruciferous (cabbage-family) vegetables regularly. Eat plenty of organic brightly colored fruits. Drink pomegranate juice, and green tea daily for their anti-oxidant effects.
  • Eat more vegetable protein from soy products such as tofu, edamame, soynuts, and soymilk.
  • Whole grains, brown rice, and bulgur wheat are less inflammatory than white flour products.
  • Stay away from refined foods. Added sugars, convenience foods, and refined carbohydrates provide little nutritional value and provoke inflammation. In other words do not eat anything that comes from a bag or a box.

Anti-inflammatory supplements
Bioflavonoids and other beneficial anti-inflammatory herbs are gentle on your system and extremely useful in reducing inflammation.

  • Digestive Enzymes
  • Turmeric
  • Omega 3 Fatty Acids (fish oil)
  • Anti-inflammatory powder drinks
  • Herbal anti-inflammatory supplements
  • List of products here

By making a few changes to your diet and lifestyle, you can find yourself feeling energized, refreshed, and filled with a sense of well-being. By taking steps to reduce inflammation, you will greatly improve the quality of your life.

==========
SOURCE


ADDITIONAL USEFUL INFORMATION:


  1. Surprising new link between inflammation and mental illness
  2. Depression is a physical illness which could be treated with anti-inflammatory drugs, scientists suggest
  3. Brain Immune Cells May Have a Role in Psychiatric Disorders
  4. Researchers: Depression May Be a Physical Illness Linked to Inflammation
  5. Inflammation in the brain is linked to risk of schizophrenia, study finds
  6. Brain's immune cells linked to Alzheimer's, Parkinson's, schizophrenia
  7. Scientists shocked to find antibiotics alleviate symptoms of schizophrenia
  8. Common acne medication offers new treatment for multiple sclerosis
  9. Lupus Symptoms Linked To Inflammation In The Brain
  10. Massive ketamine study reveals Botox also works against depression
  11. The Concept of Schizophrenia Is Coming to an End – Here's Why
 

Re: Brain inflammation is connected to virtually all types of mental illness


In what has been hailed as a paradigm shifter, a new study confirms that the Toxoplasma gondii (T. gondii) parasite really is the stuff of horror movies.

The parasite, once in the brain of rats and mice, mutes their fear of cat urine, bringing them to their furriest predator on a figurative plate. The parasite, in other words, gets to its desired destination by using the host as a kind of suicidal Uber.

But the pesky parasite doesn't stop there. It can – and does – infect humans, usually via cat litter or uncooked meat. As many as two billion people are said to be infected with the parasite, although it's unlikely that is what T. gondii intends, given we're (for now) at the top of the food chain. That doesn't mean the parasite doesn't affect us, though. Evidence suggests it's linked to all kinds of subtle changes in risk-taking and extroversion, and it has even been linked to an increased chance of suicide or of developing schizophrenia.

And it could be worse than we feared, according to new research from 16 institutions, which argues the parasite might alter – and sometimes amplify – a number of brain disorders, including epilepsy, Alzheimer's, and Parkinson's, as well as some forms of cancer.

The researchers arrived at their findings by analysing data from the Congenital Toxoplasmosis Study, which has been monitoring 246 infants with congenital toxoplasmosis (the infection T. gondii triggers). Fragments of microRNA and proteins found in those severely affected matched up with biomarkers found in patients suffering from a range of neurodegenerative conditions such as Parkinson's and Alzheimer's. The epilepsy link, on the other hand, is down to the way the parasite disrupts communication between GABAergic neurons. The cancer connection is possibly the most disturbing, as the researchers found a link between T. gondii and nearly 1,200 human genes associated with various types of cancer.

“We suspect it involves multiple factors,” explains the University of Chicago’s Rima McLeod. “At the core is alignment of characteristics of the parasite itself, the genes it expresses in the infected brain, susceptibility genes that could limit the host's ability to prevent infection, and genes that control susceptibility to other diseases present in the human host.”

It's not that the researchers are arguing that T. gondii causes Alzheimer's, Parkinson's, epilepsy, or cancer – just that a side effect of the parasite's habit of meddling with brain proteins may be making people more susceptible to them. That, though, is worrying enough, and it changes how we approach all of these diseases.

"This study is a paradigm shifter," said the paper's co-author Dennis Steindler, director of the Neuroscience and Aging Lab at Tufts University. "We now have to insert infectious disease into the equation of neurodegenerative diseases, epilepsy and neural cancers.

“At the same time we have to translate aspects of this study into preventive treatments that include everything from drugs to diet to lifestyle, in order to delay disease onset and progression.”

Some estimates reckon that half the world's population will eventually be infected with T. gondii, so more research into the parasite's side effects is certainly welcome. A severely infected cat can excrete as many as 500 million parasite-carrying oocysts, and just one of those is infectious – even if it's been left in soil or water for a year. Nature is horrible, isn't it?

SOURCE


======================
Real-life puppeteers: six terrifying organisms that can control other creatures’ minds
 



Scientists say the future of computing depends on it, and now for the first time ever, they have stored light-based information as sound waves on a computer chip - something the researchers compare to capturing lightning as thunder.

While that might sound a little strange, this conversion is critical if we ever want to shift from our current, inefficient electronic computers to light-based computers that move data at the speed of light.

Light-based or photonic computers have the potential to run at least 20 times faster than your laptop, not to mention the fact that they won't produce heat or suck up energy like existing devices.

This is because they, in theory, would process data in the form of photons instead of electrons.

We say in theory, because, despite companies such as IBM and Intel pursuing light-based computing, the transition is easier said than done.

Coding information into photons is easy enough - we already do that when we send information via optical fibre.

But finding a way for a computer chip to retrieve and process information stored in photons is tough, thanks to the one thing that makes light so appealing: it's too damn fast for existing microchips to read.

This is why light-based information that flies across internet cables is currently converted into slow electrons. But a better alternative would be to slow down the light and convert it into sound.

And that's exactly what researchers from the University of Sydney in Australia have now done.

"The information in our chip in acoustic form travels at a velocity five orders of magnitude slower than in the optical domain," said project supervisor Birgit Stiller.

"It is like the difference between thunder and lightning."





First, photonic information enters the chip as a pulse of light (yellow), where it interacts with a 'write' pulse (blue), producing an acoustic wave that stores the data.

Another pulse of light, called the 'read' pulse (blue), then accesses this sound data and transmits it as light once more (yellow).

While unimpeded light will pass through the chip in 2 to 3 nanoseconds, once stored as a sound wave, information can remain on the chip for up to 10 nanoseconds, long enough for it to be retrieved and processed.
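
The "thunder versus lightning" ratio is easy to sanity-check with rough numbers (all of the values below are my assumptions for illustration, not figures from the paper):

C = 3.0e8          # speed of light in vacuum, m/s
v_light = C / 2.4  # assumed optical group velocity in the waveguide, m/s
v_sound = 2.5e3    # assumed acoustic velocity in the chip material, m/s

distance = 0.01    # an illustrative 1 cm of on-chip path

t_light = distance / v_light
t_sound = distance / v_sound

print(f"light: {t_light * 1e9:10.3f} ns")   # ~0.08 ns over 1 cm
print(f"sound: {t_sound * 1e9:10.0f} ns")   # ~4,000 ns over 1 cm
print(f"ratio: {v_light / v_sound:,.0f}x")  # ~50,000x - about 5 orders of magnitude

That factor of roughly 10^5 is the headroom that lets the chip hold information long enough to work on it.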

The fact that the team were able to convert the light into sound waves not only slowed it down, but also made data retrieval more accurate.

And, unlike previous attempts, the system worked across a broad bandwidth.

"Building an acoustic buffer inside a chip improves our ability to control information by several orders of magnitude," said Merklein.

"Our system is not limited to a narrow bandwidth. So unlike previous systems this allows us to store and retrieve information at multiple wavelengths simultaneously, vastly increasing the efficiency of the device," added Stiller.

The research has been published in Nature Communications.


SOURCE
 
