
Human [vs/and/part] Machine

CATEGORY / Science, Technology TAGS / cyborg, free, free will, will AUTHOR / Tio DATE / 23/07/2015


Over the last 150-200 years or so, humans have invented tools so sophisticated that ‘they’ have become more and more similar to humans in many of their functions, surpassing humans at many tasks, helping them with many more, and enhancing what it is to be human.  Tools are no longer just something that humans use; they are transforming what it means to be human.  Entire societies rely on machines for measurement and calculation, transportation, delivery, communication and more; human body parts are being replaced with these tools for better health; and overall, nearly everything that has to do with humans has to do with the tools they invented.

In this book we will look at how these tools surpass humans on many levels, how they are part of what humans are, and also why the distinction between human-made tools (the machines) and ourselves (biological creatures) is blurry.  We will also look at the possibility of these machines becoming human-like: thinking, feeling, full of emotions, moods, and ‘creativity’.

The entire book will also focus on the practicality of these tools and what we can do with them: from enhancing health and creating an abundance of treatments and cures, to replacing boring, dangerous and repetitive jobs, to enhancing our own senses, and more.

We will also present the real value, the scientific one, of these tools, and try to demystify the juicy news/articles/documentaries that focus on such tools: nanobots, artificial intelligence, artificial body parts, etc.

It will be a very interesting journey, full of the latest tech, different perspectives about technology and human behavior, games/puzzles and some science that will help you rethink what you are, and much, much more.

Parts:

  1. Human vs Machine - Many scientists, economists and tech-savvy people have been focusing lately on technological unemployment, while many newscasts increasingly point out that robots/machines are steadily taking jobs away from people.  Although they present certain key points as to why and how this is happening, we will not present similar ‘job-related’ arguments, but will instead analyze the various skills that humans possess: from their vision to strength, creativity to memory, mobility and flexibility, to see just how well they stack up against today’s best machines.  If machines can see better, are more flexible, and are better able to deal with more information and tools than humans, then it becomes obvious why it’s a good thing that machines will continue to be used in place of humans, taking over all sorts of jobs that humans currently handle.
  2. Part Human, Part Machine: Replacements - We look into how we can replace nearly all parts of the human body with mechanical alternatives: from lungs, heart, limbs, spleen, to eyes or even parts of the brain.  We look at almost all of them, providing cutting-edge examples and the results of recent clinical trials for the devices we present.  We also might surprise you with an explanation of ‘how’ we see and why that is very important when considering the replacement of eyes with mechanical devices.  As a bonus, we’ll even show you people who can hear colors and others who see sound.
  3. Part Human, Part Machine: Enhancements - In this part, we look at how we can enhance what we are - our biology.  We look at nanobots, explaining what they are and what they can do today, and at how we can add new senses, from ‘sensing’ distances or impending earthquakes, to allowing the deaf to ‘sense’ words.  We will help you significantly rethink the way you ‘sense’ the world, and how enhancing our senses can dramatically improve communication and expand our understanding of reality.
  4. Human-Machine - In this installment, we cover how our own bodies are themselves machines and how, by understanding this, we can actually grow organs, print body parts, and create an abundance of very personalized medical treatments, all working together to solve the very important issue of health scarcity around the world.
  5. Artificial (or not) Intelligence, Randomness and Free Will - Have you ever wondered what “artificial intelligence” really means, or how it works?  We demystify it for you, looking at how cars can drive themselves, how software is now able to recognize faces, play video games and conduct research, and whether this software could become dangerous.  But all of that may be a bit of a trick, because this part is also about you, as we question human behavior and explain how to predict ‘randomness’.  We also have some intriguing games for you, interesting perspectives and some very real science that will make recent news headlines about artificial intelligence look quite stupid.

Author: Tio

Human beings are extraordinary creatures.  Just think of the machines they built, the discoveries they made, and the continual, steady progress of this thing they call ‘science’.  They can look back billions of years into the abyss of the universe through telescopes and mathematical formulas, manipulate atoms and even enhance their biology.  However, the human being, the individual, is extremely obsolete without the tools he invented.  And when I say obsolete, we’re talking in terms of the kinds of jobs that are required in today’s monetary system.  From their arms and legs to their brains and varied skills, it seems obvious that humans have been surpassed by machines that can do far better jobs, even without any human control/involvement.

So, what if we take all of the top tools the human invented and compare them to the bare-naked human creature?  From vision to dexterity, from memory to creativity, would humans stand any chance against their machines?

Hearing and Sniffing

If you currently rely on humans, with their little ears and tiny noses, to detect any sort of sounds and odors, then you would be better off hiring a cow, as it hears and detects odors better than any human can.  Actually, this is the same reason why dogs are often used to detect odors (dangerous chemicals, drugs, gunpowder, etc.) and not humans.  But even well-trained dogs are being systematically replaced with robots that are continually getting better at ‘sniffing’ a variety of ‘smells’.

Gasbot is one such robot, used for detecting and mapping bio-gas emissions at landfill sites.

It can:

  • Localize itself and navigate in semi-structured environments, both indoors and outdoors
  • Produce models of the gas distribution
  • Detect and localize gas sources

When it comes to hearing, check out  this auditory illusion to see how very easily humans are tricked by what they hear, depending on what they watch when they hear it.

Today, a plethora of devices exist that can detect even the slightest sounds, or remain unharmed by the loudest of them.  The human ear can be easily damaged by loud noise, and is completely deaf to most of the sound frequencies that can be detected by human-made devices.  Even when compared to other animals, humans are quite deaf.

Thus, human hearing and ‘sniffing’ abilities are either antiquated for such tasks, or were never really relied upon in the first place.

Arms and Touch

Human arms are fantastic tools.  Because of them, we have mice and keyboards, space shuttles and supermarkets, clothes and written language.  However, over the past 50 years of modern-day technological development, human arms have been systematically replaced by a variety of mechanized arms: from construction to writing, from the production of all sorts of products to machinery control.

We already have robots that can manufacture pretty much anything, from the microscopic to the macroscopic.  Looking at the huge variety of robot arms that currently exist, exhibiting so many sophisticated movements and so much control, human hands already look like ‘old’ tools.  We have robot hands with 360-degree joint rotation and ‘n’ fingers with fine sensitivity to pressure and temperature, simulating our sense of touch.  They are extremely robust, and come in many shapes, forms and materials.  You can read our book on automation to see many examples that currently exist, so we won’t go through all of those examples again in this book.

When it comes to relying on human hands to handle complex tasks, you can easily replace them with mechanical arms/tools.  Bare human hands cannot drive a screw, but a powered screwdriver can do it without any human hands at all.  In today’s world, human arms are almost useless without tools, and many of those tools can be automatically controlled by various systems or robot arms.

But we also write with our mouths or control devices with our brains.  You don’t need ‘a human hand’ these days to create something.

Stephen Hawking, a very influential scientist who has a rare form of ALS that leaves him unable to move, manages to write books and scientific papers, develop new formulas, and ‘talk’, using only the movements of his cheek and very limited movement of one of his hands.

Voice

Speaking of voice :), text-to-speech software has been gradually getting a more and more natural voice over time.  Sometimes it is hard to tell the difference between a synthesized voice and a human voice.  One example is the IVONA voices collection.  Listen to this short demo to hear for yourself. You can also go to ivona.com to listen to demos in more languages.

Imagine such software reading a story to your children or narrating documentaries in any language, or providing a voice for a character in an animated movie or game - all of that available in both male and female voices, in multiple languages and accents.

Mobility and Reaction

Humans generally have no problem standing up.  They can climb stairs, run, climb trees and react extremely quickly.  Imagining a robot that can do all of that is a bit difficult, since even the best robots that can perform tasks that are small and easy for a human are extremely slow and inflexible compared to a human.  However, robots are continually improving, as this series of DARPA robots attests, showing great mobility in many different circumstances: https://www.youtube.com/watch?v=9qCbCpMYAe4

Robots can now walk, run, climb stairs, maintain their equilibrium in tough situations, and more.  Do not forget though that when we think of robots as clumsy, it’s because we so often test them in our human-centric world, a world full of chairs and stairs, doors and floors, and lots of walls.  Thus, the mobility of a robot can be made substantially better, considering a robot can be provided with various types of propulsion, such as wheels, legs, wings, the ability to hover in the air, and more.

Try to swim faster, or otherwise out-perform a robot designed to move through water.  Or try to outrun a robot with wheels.  There is even a robot with ‘legs’ that can outrun the fastest man on Earth.

Human reaction time may seem very quick, but just take a look at this experiment to see what our reactions look like in slow motion.  Then watch this one, featuring a robotic hand that is far superior in reaction time and dexterity to any human hand.

EPFL recently developed a robot hand that is 3-6 times faster than the average human eye-hand reaction.  The robot uses a high-speed camera to detect objects and is programmed simply by manually guiding the hand toward the object.  The robot then recognizes the movement and adapts to catch objects tossed at it.  Watch a demo video.

Strength and Durability

The strongest man on earth can lift around 3 times his own weight.  A dung beetle can lift a thousand times its own weight.  A machine we know how to build can lift...well, perhaps an unlimited amount of weight.  The days when humanity had to rely on human muscle power are long gone.  Humans are also prone to disease and need breaks and food.  A machine can work non-stop, without breaks, and is far more durable than any human.

On land, the NASA crawler-transporter can carry loads of over 9,000 tons, meaning it could carry the entire Eiffel Tower.  NASA’s crawler-transporter is designed to be very slow, but this truck is much faster and can transport 400 tons at once.  That is, it can transport two huge blue whales at once.

This huge monster is almost 100 meters (328 feet) tall and 225 meters (738 feet) long.  It is used for digging and transporting earth (materials) and can transport 4 times the volume of the largest swimming pool on Earth, every day.  https://www.youtube.com/watch?v=cocg1u0nwbI

The largest swimming pool in the world is so big that you can sail small boats inside its area.

This machine, known as a ‘mole’, can drill holes up to 19 meters in diameter, through solid rock.

On water, machines can transport even bigger loads.  This ship is 4 football fields long and can transport not just one Eiffel Tower, but 66 of them!  And in the air, the largest aircraft can transport not 2 blue whales, but 3 big ones, plus 6 or so large African elephants.

Vision

Our vision is not limited to the eyes; it is about the eyes and the brain.  The same is true of our other senses, but for the sake of example, let’s keep this simple.

Have you been out today?  If so, I bet you came across many people.  How many faces do you remember?  Perhaps none, because the way we see is quite poor.  Our eyes can only focus on their center point, and our overall attention is very limited.  Watch this video to test your selective attention.

If you stretch your arms out at 180 degrees and then look straight ahead, you will probably not see your arms anymore.  More to the point, if you focus on a single word in this text, you will soon realize how the words near it become more and more blurry the farther they are from the centered word, until they just disappear from your field of view.

With all that you ‘see’ every day, only a very small spot in your field of vision is sharp, while the rest is blurry and parts of it are colorless.(source)

Even a relatively cheap camera nowadays can capture a 360 degree video, and it has no blind spots or loss of color.  You can understand this 360 degree capability by watching this short video https://vimeo.com/91509966

How much can you zoom in on this photo with your eyes?  Can you spot the yellow kayaks?

http://www.gigapan.com/galleries/11203/gigapans/152220

Focus hard, they are here somewhere.

There are drones that survey areas from higher than a 5 km altitude (around 3 miles) and, from there, can spot a pigeon flying close to the ground.  They can also stream live footage to the ground and detect/track all moving objects, from cars to people.

https://www.youtube.com/watch?v=QGxNyaXfJsA

The human eye also does a pretty bad job in low-light conditions.  It takes a while for our eyes to adjust and, even once they do, on a very dark night, we can maybe spot 2-5 thousand stars under almost perfect conditions (low pollution, no clouds, no mountains, etc.).  Think about how many stars you see when you look up, and then look at this photo taken with a relatively affordable camera.  I’m sure your eyes do not come anywhere near close to seeing that many stars and details.

This is what your room may look like to your eyes under low-light conditions, once your eyes have adjusted.  This is what it looks like to a small $2,500 camera, which is 8 times more sensitive than a human eye.

Actually, any night security camera is far better than the human eye in low-light environments.  Not to mention that humans see/sense only a tiny fraction of existing light waves, while cameras and other devices can be designed to cover a huge range of such frequencies (perhaps all of them, when combined), including infrared, which allows you to ‘see’ in complete darkness, since it ‘senses’ the heat emitted by individual elements of ‘the world’ (creatures, rocks, etc.).

Have you ever tried to catch a fly with your hand?  If so, you probably recognize that it’s very difficult to do, and that’s because a fly sees in a different way than you see.  A fly can see 10 times faster than humans.  When you watch a movie, you typically experience 30 photos (frames) per second, while your eyes and brain interpret that as continual movement (a movie).  A fly would not enjoy such a movie because it needs around 300 frames per second to see it as a movie, rather than a photo slideshow.

If 300 frames per second seems like a lot, there is now a camera that captures 100 billion frames per second.  Think about that!

https://www.youtube.com/watch?v=Y_9vd4HWlVA

So, would you prefer to hire a human being for his visual abilities?  Can a human still be a better security guard than modern day technologies?  Or better at observing any kind of event and spotting relevant information in what he sees?  Of course not.  Human vision may have been the greatest tool on the planet 100 years ago, but with the advent of photo/video cameras and other devices that can capture different light wavelengths, at much higher resolution and speed, human vision has been completely surpassed for this kind of duty.

But still, humans are better at recognizing objects and situations, right?  Well, yes.  They are still better at differentiating between cats and mice, types of cars, maybe even faces and other such ‘objects’/shapes - or are they?

So, let’s look at the brain.

Brain and Creativity

Our brains are fantastic.  No other creature has a brain that can match our capabilities.  However, we are already surpassed by computers in many areas where the human brain had reigned supreme in the past.

In school, we are told to memorize information; however, the internet ‘stores’ far more than a brain can.  When was the last time you searched for something on Google?  Why didn’t you search inside your brain?  It’s because you simply don’t know most things.  Let me emphasize that again: most of the information and knowledge that is discovered through science, you and I are not at all aware of.  That is simply because it is far too much information for anyone to retain and recall.  Long gone are the days when any advanced human society relied on people to retain information for a particular job.  Or at least those days should be long gone, as only an obsolete system would still require such skills.

How long does it take you to read an average-sized book?  A couple of days, maybe?  What if the book had 10 billion pages?  Even if you read 1,000 pages a day (which is insane), it would take you 10 million days to finish the book.  That’s around 27 thousand years of continuous reading.  You would have had to start back at a time when there were few or no humans in North and South America in order to finish that book today.  The IBM Watson computer can do that in 43 minutes.  Not only can this computer scan 10 billion pages in 43 minutes, but it can also draw very powerful conclusions that help with diagnosing diseases, understand natural language, and even come up with unique recipes.(source)
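If you want to check the arithmetic, here is a minimal sketch of it (in Python, treating the figures above as rough approximations):

    # Rough check of the reading-time claim above (all figures are approximations).
    pages = 10_000_000_000         # a hypothetical 10-billion-page 'book'
    pages_per_day = 1_000          # an extremely fast human reader
    days = pages / pages_per_day   # 10,000,000 days
    years = days / 365             # roughly 27,397 years of non-stop reading
    print(days, years)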

The trend with computers today is the big data that is gathered daily.  From smart health-tracking devices to Facebook posts, YouTube videos, blogs, security cameras, and smart fridges, a huge amount of data is created every day.  So huge that if you added a 100 GB hard drive to your computer, you would need 25 million more of them to store all of the data that is produced in a single day.(source)  Imagine the entire population of Australia, every single person living there, each having a 100 GB hard drive full of data.  That is how much new data is produced every day.
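The same kind of back-of-the-envelope math shows the scale of that daily data (again, a rough sketch using the figures above):

    # How much data is that per day? (using the rough figures cited above)
    drives = 25_000_000                         # 100 GB hard drives filled per day
    total_gb = drives * 100                     # 2,500,000,000 GB
    total_exabytes = total_gb / 1_000_000_000   # 1 EB = one billion GB
    print(total_gb, total_exabytes)             # about 2.5 exabytes of new data daily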

That is the key to how smart computers have become: big data.  The type of computing that can mine all of this data is called cognitive computing.  Many consider what we are experiencing with cognitive computing to be a new era in computing.  First came mechanical systems that counted things (1900).  Those machines evolved into electro-mechanical devices over time.  In 1950, there was a major shift to programmable systems, the ones that we still use now.  You program these machines to do tasks (like apps on your smartphone), and they do them.  However, many experts claim that another shift happened in 2011, and we are now in its embryonic phase: an era in which computers actually learn, becoming smarter with time.  The interesting thing about this new kind of computing is that it learns like a human being, through examples and repetition.  And the more data you feed into it and the more you allow it to learn, the ‘smarter’ it becomes.  There is nothing ‘magical’ about this, since it’s basically following a bunch of statistics and rules, coupled with the ability to understand natural language.  These computers read, literally, billions of documents, looking for patterns to highlight.
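To make the ‘nothing magical, just statistics’ point a bit more concrete, here is a toy sketch of learning associations by counting patterns in text.  This is not how Watson actually works internally; it only illustrates the idea that more data means better statistics:

    from collections import Counter
    from itertools import combinations

    # Toy 'learning by example': count which words appear together in documents.
    documents = [
        "aspirin relieves headache and reduces fever",
        "ibuprofen reduces fever and relieves pain",
        "rest and fluids help with fever",
    ]

    co_occurrence = Counter()
    for doc in documents:
        words = set(doc.split())
        for pair in combinations(sorted(words), 2):
            co_occurrence[pair] += 1

    # Word pairs that show up together often become 'associated';
    # feed in more documents and the statistics get better.
    print(co_occurrence.most_common(5))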

The only way to adequately explain these new computer systems is to give you an example: Let’s say you want to book a trip to a place where the temperature is not too hot, but not too cold.  You want the trip to happen in 2 months’ time.  You want the hotel to have a swimming pool and sushi on the menu, and you’ll bring your wife and 2 kids with you.  You also want to go scuba diving to see some coral reefs while you’re there, and the kids want to enjoy a rollercoaster ride.  For the sake of providing a present-day example where we use money for barter, you also have a budget in mind for your trip.

In today’s world, how would you go about trying to find such a location?  Maybe you could start by asking people around you, although they know very little about the world and such places, or by hunting through the many holiday-planner websites where you can select certain keywords and categories, but cannot get anywhere near as specific as what you have in mind for this trip.

Now here comes cognitive computing with an IBM Watson-like app, where all you need to do is say, using natural language, what you want from the trip, as exemplified above.  The app searches through Wikipedia, Facebook and Twitter posts, TripAdvisor and other digital sources, interprets the data in a comprehensive way, and finds the perfect location for your holiday.  It’s as simple as that.
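As a rough illustration of what such an app has to do behind the scenes, here is a hypothetical sketch: the natural-language request gets reduced to structured constraints, which are then matched against data aggregated from many sources.  The destinations, fields and numbers below are invented for the example:

    # Hypothetical destination data, as if aggregated from travel sites and reviews.
    destinations = [
        {"name": "Island A", "avg_temp_c": 24, "pool": True, "sushi": True,
         "coral_reef": True, "rollercoaster": True, "family_price": 2800},
        {"name": "Resort B", "avg_temp_c": 35, "pool": True, "sushi": False,
         "coral_reef": False, "rollercoaster": True, "family_price": 1900},
    ]

    # Constraints extracted from the spoken request (the hard part: natural language).
    wanted = {"pool": True, "sushi": True, "coral_reef": True, "rollercoaster": True}
    budget = 3000

    matches = [d for d in destinations
               if 18 <= d["avg_temp_c"] <= 28                  # not too hot, not too cold
               and d["family_price"] <= budget
               and all(d[k] == v for k, v in wanted.items())]
    print([d["name"] for d in matches])                        # -> ['Island A']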

You can apply the same approach to finding a diagnosis for your symptoms, learning about anything you want, or just asking any kind of question and being provided with relevant advice.

These systems are already tested and functional, but not yet widely available for public use.

Understanding natural human language (how we speak) is the key to the fast development of such computers, as natural language is the main source of unstructured information.  80% of the ‘25 million 100 GB hard drives worth of data produced daily’ is in the form of this kind of untapped, unstructured data.(source)

As the original inventor of the software behind the IBM Watson computer pointed out in this TED talk, even though the software has not changed much over the past several years, the big change has been in the data that the software can tap into.  The more data it is provided, the more associations and connections it can make, resulting in better statistics.  Computers can now understand natural human written language and even translate it from one language to another or recognize human speech.  And while they are not perfect, the rate at which they continually improve is phenomenally quick.

At present, they are within about 1% of expert-level accuracy at recognizing objects in photos, and at over 97% accuracy at recognizing human faces (better than humans).

There are computers today with millions of nodes and billions of connections, although the human brain has billions of nodes and trillions of connections.  However, based on Moore's law (the observation that the number of transistors in a dense integrated circuit doubles approximately every two years - and we have been experiencing that for decades), we will reach the human brain’s capacity of nodes and connections within just 25 more years.  You and I, if you are not too old 🙂 and don’t get hit by a car and die, will still be alive to take advantage of this huge computational power.
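The ‘25 more years’ figure follows roughly from the doubling itself.  Here is a minimal sketch of that reasoning, assuming round numbers for today’s systems and for the brain:

    import math

    # If connection counts double every ~2 years, how long until computers reach
    # brain-like scale? (round, assumed figures, per the text above)
    today = 1e9       # 'billions of connections' in today's largest systems
    brain = 1e13      # 'trillions of connections' in a human brain, order of magnitude

    doublings = math.ceil(math.log2(brain / today))
    print(doublings, doublings * 2)   # ~14 doublings -> roughly 25-30 years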

Learn more about the Watson computer and its amazing present day capabilities in this talk.

Human vs Machine - One on one.

Hands down, machines beat humans on so many levels when it comes to memory, decision making, or face recognition (and they are getting close in object recognition).  They still have difficulties with translation and speech recognition; however, they are literally getting better at those every single day.

Computers can also write stories and news articles (in a very quick and accurate manner), compose songs and poetry, or even paint.

Keep in mind that when a human writes, he uses his pointy ‘tentacles’ (fingers) to physically push some buttons on a keyboard, or to press the point of a stick while dragging it across a piece of paper.  A machine needs none of that.

IF: from vision to hearing and odor (and other) sensing; from strength and durability to speed, mobility, decision making and voice recognition/translation/replication; from memory to data mining; robots/machines/computers/software are already better than, or close to, human capabilities,

THEN: what jobs are left for humans, since these machines can drive, act as doctors or assistants in perhaps any domain, function as managers, create unique recipes, songs, or articles, build things, maintain them, and make new, important discoveries faster than all of humanity combined?

It’s now easier to think of what humans are still better at handling, meaning what jobs can’t be replaced thus far, than to think of what jobs can be replaced.

There are still some domains where humans are better than robots, and these domains tend not to be ‘jobs’ in today’s world, which is a positive note.  Humans seem to be very good at interacting with other humans: providing moral support, teaching, being creative and inventing new things.  Even though robots are starting to become good at reading human emotions, making discoveries on their own out of big data and in lab research, replacing teachers’ interaction with children, or even at the art of ‘debate’, we are far from becoming useless creatures.  Technology is like a piano, and we are the ones making the music.  Over the past 50 years, ‘jobs’ have become an outdated and obsolete ideal, but the concept of ‘work’ is something quite different.

While the use of sophisticated computer systems will surely continue to expand in controlling complex systems like transportation or production, mining big data to arrive at better decisions, discovering new things (from medical treatments to perhaps important mathematical formulas), composing original work (from documentary scripts to music), and more, we humans are the ones for whom all of this is made, and we will be part of it (discovering right alongside them, creating and innovating, enjoying and educating).  We are still the only ones who can look at all this and inject meaning.  No robot will look at the stars in awe, asking what its place in the universe is, at least not for many years to come (or maybe never).  No robot will fight to create an equal society for all, or to take better care of the environment.

Computers, robots, devices and machines are tools, our tools, and we need to take advantage of their abilities without being afraid of them.

Four to five years ago, you could barely find people talking about robots replacing jobs.  Today, it looks like this has become a major concern for many people around the world.  From Bill Gates to Google, Jeremy Rifkin to M.I.T. professors, Peter Diamandis to well-known YouTubers, and thousands of news headlines, the world may finally be recognizing that we, as humans, have been surpassed on so many levels by machinery that is massively more efficient and better designed for these jobs.  As a result, we must rethink a global society that still relies completely on human labor (jobs), just so that people can ‘afford to live’ and so that the people benefiting most from the current approach can keep on living better than the rest.  The only thing I am afraid of is that there seem to be no real alternatives in any of these people’s minds, as they do not seem to think about the bigger picture and thus continue to try to solve new problems with the same old, outdated tools and solutions that created the problems, perhaps eventually resulting in total chaos.

We humans are not becoming obsolete creatures.

It’s just that it’s about time that we start learning how to be fully human,

since for most of recorded human history, we have been doing repetitive machine-like tasks.

In the previous part of this book, we compared humans with machines in order to ‘weigh’ which one sees better, is stronger, faster, more reliable, and overall better at handling the ‘jobs’ that people are required to do within today’s monetary system.  We did that to highlight just how easily many humans could already be freed up from boring and repetitive jobs that machines are much better equipped to manage, allowing those humans to instead use their brains to discover, enjoy, relax and improve (their lives, society, etc.).

Today, we will look at how humans use machine-like devices to replace many of their organs and body functions.  This is a vitally important field to understand, as these mechanical alternatives often mean the difference between life and death, while also being more resilient and performance-enhancing, providing their recipients with better health and a solution for organ donor scarcity.

Mechanical Body Parts for Humans

What ‘organic’ human body parts can we replace with mechanical ones that can render a better outcome, from performance to durability?

Let’s go from toes to head, looking closely at legs, stomach and heart, eyes and nose, and everything in between.  Keep in mind that we will only be focusing on non-biological body part replacements here, as we will tackle biological enhancements and replacements in the next part of this book.

Limbs and Movement:

In order for us humans to walk, we need healthy bones, lots of muscles, strength, coordination and flexibility.  To mimic what a leg does, as well as how it communicates with the brain and the rest of the body, turns out to be quite a challenge.  Multiple 3D-printed prostheses have been developed recently and, although they represent a very cheap (in terms of energy and materials) means to quickly replace a missing limb, they are not nearly as advanced as a mechanical prosthesis, because mechanical limbs allow for much more flexibility and adaptability for movement.

One such mechanical leg is the Genium X3.  It is waterproof, its battery lasts 5 days and, more importantly, it detects pressure and its position in space, adapting to different kinds of movement: riding a bike, running, driving, or even swimming.
https://www.youtube.com/watch?v=wDv-8hrhqOg

Such mechanical legs can even be jointed at the hip via a 3D Hip Joint System that results in a three-dimensional hip movement to compensate for pelvic rotation.  The result is a symmetrical, natural walking pattern.  Watch this video demo to see it in action - https://www.youtube.com/watch?v=zgWRrDTakaY

Thus, the leg becomes complete from the hip, while also serving more capabilities than a normal prosthesis as it acts as a shock absorber, adapts to uneven terrain, provides a smooth rollover from heel to toe, and even allows for multi-axial motion (which means even more mobility and comfort), plus the materials it’s made from give it a ‘spring to your step’, meaning that it compresses when you apply weight and propels you forward as your foot rolls.(source)

Some mechanical lower limbs, like BIOM, are now able to communicate directly with one’s biology to adapt their movements (they can connect directly to nerves to understand how the person wishes to move).  This is a ‘huge step’ towards properly integrating these mechanical devices with a human’s biology through more natural connectivity.  Imagine wearing a stiff, non-mechanical leg.  How hard would it be to move around?  Keep in mind that you need to feel the pressure on your artificial leg to walk smoothly, you need flexibility of movement to avoid tripping or to change the direction of your walking, and so on.  Today’s mechanical legs can understand how you move and respond accordingly, allowing people without legs to do nearly anything that a person with natural legs can do.

As an example, there are people with movement handicaps (missing limbs, for instance) who can participate in physically intensive sports at a high level of performance.(source)(source)

As a side note, mechanical legs can be coated with a silicone covering to look almost identical to real legs, as shown in this video - https://www.youtube.com/watch?v=H1-yVu4JJLY

In addition to helping those with missing legs, these machines are also helping those who suffer from paralysis.  Exoskeletons are already in use for such cases.  The exoskeleton ‘senses’ the wearer’s body position and balance points, triggering movement according to these inputs and thus allowing people who otherwise cannot move to walk again.  This technology is still in its early stages, so it is more of a prototype, but it will improve significantly over a very short period of time, as most technologies do these days.(source)

Today’s mechanical arms use similar technologies to provide control and connect to the human body.  Sensors detect muscle movement and tension, or are connected directly to nerves, and that feedback is translated into the robotic arm’s movement.  One extraordinary example is a man who can control two mechanical arms and shoulders through multiple sensors connecting the mechanical arms to different nerves in his body.  Even though the arms and shoulders are very complex and capable of different kinds of motion, the control system is still in an early stage of development, so it’s slow and very simple.(source)

You see, when these mechanical prostheses are attached to the body, the body needs to have well-functioning muscles or nerves to communicate with them.  The brain sends commands to the muscles and nerves, and they, in turn, activate the mechanism of the arm (or leg, or other devices).  If those muscles and nerves are also damaged, then it becomes more difficult to find a solution, although nerve and muscle transplants from a different part of the body are now possible, too.(source)

However, a new kind of connectivity between mechanical devices and the human body is increasingly being tested: a direct connection of such devices with the brain, fully bypassing other parts of the body.  To put it simply, this technology basically reads brain patterns and then associates them with the movements of a mechanical limb.  So, if you imagine picking up a cup and putting it on a shelf, and then repeat this a couple of times, this technology can directly analyze your brain’s activity, learn your specific brain patterns for that kind of movement, and then translate them into physical movements of the robotic arm.(source)
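A heavily simplified sketch of that ‘learn the pattern, then translate it into movement’ idea is shown below.  The feature vectors and movement labels are invented; real systems use far richer neural recordings and far more sophisticated decoders:

    import math

    # Toy brain-to-movement decoder: nearest-centroid classification.
    # Each 'recording' is a small feature vector summarizing brain activity.
    training = {
        "reach_forward": [[0.9, 0.1, 0.2], [0.8, 0.2, 0.1]],
        "lift_cup":      [[0.1, 0.9, 0.3], [0.2, 0.8, 0.2]],
    }

    # Learn one average pattern (centroid) per imagined movement.
    centroids = {
        move: [sum(col) / len(col) for col in zip(*samples)]
        for move, samples in training.items()
    }

    def decode(new_pattern):
        """Pick the movement whose learned pattern is closest to the new one."""
        return min(centroids, key=lambda m: math.dist(centroids[m], new_pattern))

    print(decode([0.85, 0.15, 0.15]))   # -> 'reach_forward'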

Imagine the same technology applied to exoskeletons and mechanical legs, or used to control wheelchairs, drive vehicles, and operate many other devices.  Thus, with only ‘the power of the mind’, it’s now becoming possible for people to control different kinds of devices that allow them to move, reach, grasp, etc.

Another fascinating complement to this field is artificial muscles.  These are basically pneumatic ‘bladders’, precisely controlled by air flow, that bring more flexible and natural movement to mechanical limbs.  Check this playlist showcasing their use in limbs to see how natural movement becomes when assisted by this technology.

https://www.youtube.com/playlist?list=PLGW2heHH7L3MxOxBNQYRizlpVUQX9Qmlp

While these technologies are generally used to replace missing limbs, they can also enhance existing limbs, easing movement and improving strength and performance.  Imagine similar devices that may help you walk longer distances, climb under more difficult conditions, or control devices from a distance with your brain.

Alongside 3D printing, limbs are becoming more easily replaced with mechanical alternatives, and with further advancement in software and materials, mechanical movement will become more natural, and simply a matter of ‘thinking about it’.

Joints and Bones:

Although we can’t exactly call these mechanical, we should mention that there are already many procedures that allow for joint replacements (hip, knee, shoulder, disc) or bone replacements using material alternatives to biological structures.

The first 3D-printed skull, lower jaw, upper jaw or parts of the skull, and pelvis, each made of strong materials, have already been transplanted to some patients.

These examples are just a sampling, but there may already be ‘mechanical’, non-biological alternatives for all joint and bone replacement needs.

ORGANS:

To replace the functionality of a biological organ with a mechanical device is far more complex and sophisticated than replacing limbs, since organ functionality often means the difference between life and death.  One can live without legs and arms, but not without a heart or a liver.

KIDNEY

Your kidneys’ main function is to act as a filtration system for your blood; removing toxins from your body by transferring them to the bladder, where they are later evacuated from the body during urination.  Kidney failure occurs when the kidneys lose the ability to sufficiently filter waste from the blood.  Many factors can interfere with kidney health and function, such as toxic exposure to environmental pollutants and chemical food preservatives, certain diseases and ailments, and physical kidney damage.  If your kidneys cannot manage their task, your body becomes overloaded with toxins.  Left untreated, this can lead to kidney failure and may result in death.(source)

People can live without one kidney, but not without both.  Over one million people die from kidney failure every year, while around 1.4 million are currently helped by an artificial kidney called a dialysis machine.(source)  However, that also means keeping the patient connected to a huge machine without the ability to move or have a normal life.  But now, a cup-of-coffee sized device has been invented and is nearly ready to be tested in patients.  It is designed to last for the life of the recipient and should be ready for trial in 2017.(source)(source) https://www.youtube.com/watch?v=gtsHDY5S21A

Another small implantable artificial kidney is set to be tested in human trials in 5 - 6 years, according to this company.  There are other mechanical replacements for kidneys that are not as small, but have already shown success in their first clinical trials.  These are not designed for implant, but for wearing them on a belt, allowing patients much more mobility and a more normal life compared to dialysis.(source)

A mechanical replacement for kidney function has been available for many years.  The challenge now is to make it smaller and smaller.

LIVER

Nearly all of ‘the good stuff’ in what you eat and drink eventually passes through your liver, an organ that performs over 500 different functions.  Although the liver is the only human organ that can fully regenerate from as little as 25% of its mass, liver failure can still occur.

One interesting fact is that because the liver performs many complex functions in and for the body, there is no properly tested mechanical device to replace its functions, at least so far.   Although clinical trials have already begun for such devices, their potential is yet to be confirmed.(source)  However, these devices make use of actual liver cells contained within devices that are externally connected to the human body to achieve liver functions, so it may be more accurate to regard these as biological devices, rather than mechanical ones.

https://www.youtube.com/watch?v=XiuyOkhLugU

PANCREAS

The pancreas’ main function is the production of insulin, which in turn controls the level of glucose (sugar) in the blood.  When this fails (Type 1) or becomes reduced (Type 2), there is more glucose in the bloodstream than normal, and the result is a serious condition known as diabetes.  All Type 1 and some Type 2 diabetes cases require insulin intake, affecting 371 million people worldwide, and that number is expected to rise to 552 million by 2030.  Although humans can live without a pancreas, they must take insulin and pills that contain digestive enzymes for the rest of their lives in order to survive.(source)

There is a new mechanical device designed to control the distribution of synthetic insulin in an automated way, and it looks very promising after the first clinical trial, keeping subjects within a safe blood glucose range for 80 percent or more of the time.(source)

https://www.youtube.com/watch?v=kWurrpn2s64
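The control idea behind such a device can be sketched very roughly as a feedback loop: measure glucose, compare it with a target, and adjust insulin delivery accordingly.  This is only a toy illustration of the concept, not the device’s actual dosing algorithm:

    # Toy closed-loop insulin controller: proportional response above a target.
    TARGET_MMOL_L = 6.0   # illustrative target blood glucose
    GAIN = 0.5            # illustrative gain (dose units per mmol/L above target)

    def insulin_dose(glucose_mmol_l):
        """Return an illustrative insulin adjustment for one control step."""
        error = glucose_mmol_l - TARGET_MMOL_L
        return max(0.0, GAIN * error)   # never negative; low glucose -> deliver nothing

    for reading in [4.8, 6.2, 9.5]:
        print(reading, "->", insulin_dose(reading))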

But there is also one device that has no mechanical parts, using a gel that isolates a reservoir of insulin.  The gel hardens and softens in real-time response to fluctuating glucose levels within the body, allowing insulin to be released from the reservoir precisely when needed.  Human trials of this pump are due to commence in 2016.(source)

SPLEEN

The spleen is another organ that humans can live without, as the liver would take over many of its functions.  However, the body would then lose some of its ability to fight infections.(source)

The spleen’s function is to keep the blood ‘clean’ of toxins.  A mechanical device can do this today, as it’s able to provide the basic functionalities that a spleen provides to the body by eliminating the vast majority of infectious ‘bugs’ from blood (bacteria and fungi).  It can clean all of your blood in about 5 hours, although it’s not a portable device and you would need to be hospitalized for the duration of the procedure.  But don’t forget, you can live without a spleen.(source)

DIGESTIVE SYSTEM

Can we replace the human stomach, small intestine and large intestine (basically most of the digestive system) with a mechanical one?  Not really, but there are mechanical models of the human digestive system which mimic the ‘real’ thing quite well.

In order for you to digest food, there is a series of events that have to take place: from the saliva that mixes up with the food, mastication (chewing into smaller bits) and muscular contractions (moving it from one place to another), to the stomach’s acid and bacteria in the gut (intestines), and eventually, the transportation of ‘good stuff’ from the broken-down food into the bloodstream.

There are a few teams of engineers around the world that have built mechanical models of the entire digestive system.  These are generally used for drug testing and more, but there is also a robot that can actually digest food and extract energy from it for mechanical movement.  It does that with the help of bacteria and, before it suffered an unrelated mechanical problem, it was able to ‘survive’ for 7 continuous days by collecting and digesting food.(source) https://www.youtube.com/watch?v=ooGTNpZKAZY

Could such a system be used in humans to replace their entire digestive system?  I doubt it, but the interesting fact about humans is that they can basically survive without any part of the digestive system except the small intestine, and even the small intestine can still do its job at about 19% of its total length.  You would have to be fed intravenously if you had no functional small intestine.(source)

So far, there is no mechanical alternative for the human digestive system, but perhaps other non-mechanical and biological alternatives exist, as we will discuss in the next part of this book.

LUNGS

The function of the lungs is to transport molecules that are ‘good’ for us (oxygen) from the atmosphere to the blood, and take the ‘bad’ molecules (carbon dioxide) from our bloodstream and exhale them out into the atmosphere.  You can live with just one lung, and without both for about 30 seconds.  That is a dark joke, but do not despair.  There are machines that can keep you alive even if both of your lungs fail.

Lung diseases are the third leading cause of death, with over 3 million deaths a year and over 329 million people affected by various lung diseases worldwide.(source)

There are several technologies that can replace some of the lungs’ functions for a short period of time: hours or days, in cases of some particular surgeries where the patient’s lungs are not functional, or for a period of months for patients waiting for a lung transplant.(source)  These are usually big, external machines that are only efficient when they are properly monitored and the patient is connected to them in a hospital.  The most time that anyone has lived with such an artificial lung was for 5 months.(source)

So how about a real replacement for the lung; one that is small and can do the job without the patient being immobilized in a bed?  There are several prototypes already.  One is called Biolung, a soda-can-sized device that uses ‘heart power’ to pump blood into its chamber, where oxygen and carbon dioxide are exchanged across a plastic membrane.  The oxygen-rich blood then returns to the body.  The device is designed for implant and has no moving parts.  Biolung has been tested in sheep, resulting in better survival rates and less lung injury than a conventional ventilator.  It is expected to be tested in humans about 2 years from now.(source)  This device isn’t designed for long-term use, however.  It’s only intended for a couple of months’ use by patients awaiting a lung transplant, but it is an important piece of technology due to its small size and ability to be implanted within the patient.

Another team is working on a years-long solution for mechanical device lung replacement.  They’ve been working on this device for the past 20 years and have recently received a four-year, $2.4 million grant from the National Institutes of Health (NIH) to support research and development for the artificial lung.  They say that such devices could be in use within the next 5 - 10 years.(source)  The ‘downside’, if there can be one in a situation where your life depends on such a device, is that while it allows for certain mobility and use from home (not being hospitalized), this kind of device still has to be closely monitored by doctors and is still unable to support the mobility one has with biological lungs.(video)

AmbuLung is designed with all of these flaws in mind, and the team behind it wants to create a fully functional lung that allows normal mobility for patients over long-term use.  They started the project in 2012, and animal trials should be concluded by June of this year.  If all goes well, human trials will begin shortly after that.  However, they’re not just using mechanical parts for this.  To achieve this performance on such a small, implantable scale, they also employ living cells within a design that is ‘mechanically and mathematically’ driven to optimize the function of a new kind of device that, they say, may completely revolutionize artificial lung functionality.

HEART

The heart is the organ that pumps blood throughout our body, providing the whole organism with oxygen and nutrients, while also assisting with the removal of metabolic wastes - substances left over from metabolic processes which cannot be used by the organism (they are surplus or even harmful) and must therefore be excreted.  These include nitrogen compounds, excess water, CO2, phosphates, sulfates, etc.

As with any other organ, the heart comes with a predisposition for harmful mutations.  When genetic ‘errors’ occur, a human can be born with a non-standard heart structure; one that can result in either the death of the human or a variety of issues that the human must deal with for the rest of her/his life.  Environmental factors, such as various diseases or certain drugs that the mother has/takes, have been shown to correlate with numerous heart structure errors.  Even with a good heart, multiple issues can later arise with this organ.  These issues are so numerous and impactful that heart disease is the number one cause of death in the world, killing more than 17.3 million people every year.

Luckily for us, there are several artificial hearts out there that have already proven able to completely replace the heart’s functions for a certain period of time, and artificial heart designs keep steadily improving, delivering better results over increasingly shorter development cycles.

Over the past 45 years, around 1,400 artificial hearts of 13 different designs have been implanted in heart failure patients.  By far the most used model is SynCardia, accounting for over 96% of those implanted.  Artificial hearts are mainly designed to be used as a temporary alternative until a ‘real’ heart becomes available for transplantation.  The longest period that anyone has lived with an artificial heart was four years.  One-third of those who currently use SynCardia have had it for more than a year.  There are people with artificial hearts who enjoy boxing, hiking, and other sports; living relatively normal lives, and often more active ones than people with a real heart.(source)

https://www.youtube.com/watch?v=qUtKe_jSoas

A new type of artificial heart has been specifically engineered for long-term use (5 - 10 years or more) or, given enough improvements, perhaps even permanent replacement.  BiVACOR is a small artificial heart designed to completely replace all biological heart functions.  Since it’s as small as a fist, it can also be used in children.  It has a single moving part and relies on magnetic levitation for precision, avoiding mechanical wear over time.  Due to its simplicity, it is much less prone to malfunctions.  BiVACOR was developed by a team of doctors and engineers and, so far, has been successfully tested on sheep and cows.  They are now raising money toward improvements and future clinical trials on humans.  The device could be ready for humans in 3 - 5 years.(source)(source)

BiVACOR and SynCardia both require the recipient to carry a battery pack that is currently about the size of a toaster, but it pretty much provides them with all of the freedom of movement that a normal heart does.

SMALLER AND ‘DISPOSABLE’ ORGANS

As you can see, nearly all of the main organs can be substituted with mechanical alternatives.  However, there are still parts, like the reproductive system, skin, the arterial and venous systems, and other smaller items, that are not yet replaceable by non-biological mechanical devices.  That may be because there is much less need for it, since these parts are not normally life-threatening when they fail, or because there are already plenty of treatments, cures, or other biological enhancements available for them.

To be fair, there are artificial valves, artificial veins for use in bypass situations, and other small ‘plumbing’ fixes with other materials and small mechanisms, but I do not see the advantage in trying to list all of them here, as there are so many types of procedures and alternatives.

MOUTH AND NOSE

Is there a way to replace the mouth with a mechanical one?  One that can chew, talk, swallow?  The mouth is more than that, though, as it communicates with the nose, it has a tongue, produces saliva, and is all about muscles, jaws and air flow.  It may be completely ‘unreasonable’ to think of the mouth as a separate part of the body that could be fully replaced with a mechanical or a biological alternative, but as you saw above, parts of the cranium can already be printed using various non-biological materials and implanted, while artificial teeth have existed for many decades now.

Biological alternatives for the trachea and esophagus, crucial for breathing and swallowing food and liquid, along with other small parts of the throat, are already in development, and we will talk about those in a separate article about bio-engineering.  However, there are mechanical alternatives for some of the functions of the larynx, a crucial part of the throat involved in breathing, sound production, and protecting the trachea against food aspiration.  In patients with larynx cancer, the entire larynx can be removed and, with the help of a device implanted in the throat, the person’s ability to speak can be restored.  You see, when the larynx is removed, the vocal cords (voice box) are also removed with it.  We talk by vibrating our vocal cords while exhaling air through them.  The resulting sound is then fine-tuned by tongue, lip and jaw movements, resulting in sound vibrations that we interpret as ‘speech’.

This voice restoration device is basically a vibrating piece of silicone that replaces the voice box.  There are a few alternatives out there for vocal cord replacement, as showcased in these 3 videos - https://www.youtube.com/playlist?list=PLGW2heHH7L3Nvxd3L0npTDNfYAB6kMghF

Mechanical alternatives for the nose and tongue, organs that provide our smell and taste sensors, are being developed, but as a combination of biological and mechanical parts.  In addition, they are not being designed to replace biological human noses or tongues, but rather for other ‘bio-sensor’ applications.  There are already various treatments and biological solutions for restoring the loss of these senses, while these device designs are far more sensitive and better suited to applications other than human body implementation, which may never be done due to better bio-engineering alternatives for enhancing one’s senses.

EARS

Although external hearing aids (sound amplifiers) have been available for decades, there are newer devices called ‘cochlear implants’ that can actually ‘restore’ hearing to a certain degree, even in some completely deaf people.  What this means is that, even if the internal ear has become damaged or nonfunctional, hearing can still be revived by implanting this digital device, which then communicates directly with the auditory nerve, the only biological component that needs to be intact.  It converts sound from outside into a digital format, and then transforms this digital data into specific stimulations of the auditory nerve, which we humans interpret as hearing.

https://www.youtube.com/watch?v=zeg4qTnYOpw

There are a variety of cochlear implant devices available, allowing people renewed access to medium to higher frequency sounds.(source)
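Conceptually, the processing chain can be sketched as: split the incoming sound into frequency bands, measure the energy in each band, and stimulate the corresponding electrode along the auditory nerve.  The sketch below is a crude illustration of that idea using a plain Fourier transform; real processors use dedicated filter banks and much more refined stimulation strategies:

    import numpy as np

    def band_energies(audio, sample_rate=16_000, n_electrodes=8):
        """Crude sketch: energy per frequency band, one band per 'electrode'."""
        spectrum = np.abs(np.fft.rfft(audio))
        freqs = np.fft.rfftfreq(len(audio), d=1.0 / sample_rate)
        # Split 100 Hz - 8 kHz into equal bands (real devices space them non-linearly).
        edges = np.linspace(100, 8_000, n_electrodes + 1)
        return [spectrum[(freqs >= lo) & (freqs < hi)].sum()
                for lo, hi in zip(edges, edges[1:])]

    # A 440 Hz test tone should mostly 'light up' the lowest band.
    t = np.linspace(0, 0.05, 800, endpoint=False)   # 50 ms at 16 kHz
    tone = np.sin(2 * np.pi * 440 * t)
    print(band_energies(tone))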

EYES AND THE BRAIN

Perhaps the sense we rely on most is sight.  When sight goes dark, it completely changes the lifestyle of that human being.  Is it possible to replace our eyes with mechanical ones?

The challenge with replacing such a complex organ with a mechanical device is huge.  I want to try to fully explain why, or else you may not fully appreciate the challenges of developing a device for vision or, more interestingly, alternatives for such a device that may instead rely on sound or taste.

Color and interpretation.

There are 3 types of biological receptors within your eye that are suited to detecting only 3 particular light wavelengths: red, blue and green.  That comprises all of the light wavelengths we humans can directly detect - no more, no less - only 3!  So how is it that we can see so many colors?  Well, colors don’t actually ‘exist’, per se.  Color is to light wavelengths what sound is to vibrations.

Sound: Vibrations from a source can travel through a medium, such as air or water, and can be ‘felt’ by someone or something.  When someone vibrates their vocal cords, through air, and the vibrations reach our ears, we culturally interpret those vibrations as certain sounds (we might describe it as speech, music, rhymes, pleasant or not, etc.).  But those same vibrations, from the same object and through the same medium, can be interpreted in other ways that you may never have considered before.  One example of this is schlieren imaging.  This method allows for auditory vibrations to be visualized with photons, similar to how we see light wavelengths.  Vibration waves are not ‘heard’, but instead ‘visualised’.  If one claps his hands, you will hear the clap, but someone else can visualise it with a schlieren device.  Same event, same vibrations, different sensors - different interpretations.

https://www.youtube.com/watch?v=px3oVGXr4mo

Color: The same thing goes for how we see.  First, check out this short, animated video, because I ‘see’ no way to explain this further without its help. 😉 - https://www.youtube.com/watch?v=l8_fZPHasdo

So, colors are human concepts/words to describe how we humans perceive different light wavelengths, because wavelengths of light can be ‘sensed’ in many different ways, with many different sensors/senses and devices.  Example: here’s a photo.  You and I see a ‘green’ landfill, but there is at least one human who sees in grey and hears this.  There is no green for him.

Why?  Because some of his biological sensors are different and he can’t interpret light wavelengths as color.  Instead, he has a chip implanted in the back of his head with a digital sensor that converts light waves into sounds that he can hear.  He ‘hears’ light waves as you ‘see’ them.  Again, same photo, same light waves, different sensors - different interpretations.  You interpret them as green, while he hears that sound.

He cannot understand what you mean by color.  For him, the way someone dresses ‘sounds’ a certain way rather than ‘looks’ a certain way.  Watch this TED video with him explaining all this - http://www.ted.com/talks/neil_harbisson_i_listen_to_color?language=en#t-546948

An Android app has been developed to allow you to experience, in a way, what this guy experiences when he ‘sees’ the world.  The app uses your phone’s camera, converting the ‘colors’ it sees into sounds.
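
As a rough illustration of what such an app might do, here is a toy mapping from a pixel’s hue to an audible tone.  The frequency range and the mapping are invented for the example; the actual app (and the implant described above) use their own scales.

```python
import colorsys

def hue_to_frequency(r, g, b, f_low=120.0, f_high=1000.0):
    """Toy mapping from a pixel's hue to an audible frequency (Hz).
    The real device/app may use a different mapping; this only
    illustrates the idea of 'hearing' color."""
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    # Map hue (0..1 around the color wheel) onto a frequency range
    return f_low + h * (f_high - f_low)

print(round(hue_to_frequency(255, 0, 0)))   # red   -> low tone
print(round(hue_to_frequency(0, 255, 0)))   # green -> mid tone
print(round(hue_to_frequency(0, 0, 255)))   # blue  -> higher tone
```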

In this ‘sense’, you cannot explain to a blind person what color is, any more than you can explain to a deaf person what music is, or any more than we, the ‘normal’ ones, can understand what it’s like to feel the magnetic field of the Earth.  You can draw the planet’s magnetic field to represent it as a visual map, but that’s like drawing sound waves for a deaf person and expecting them to understand how those soundwaves ‘feel’, or asking a deaf person to look at sheet music and understand the song as you hear it.  So don’t be fooled into thinking that staring at a map of Earth’s magnetic field will help you understand how the field ‘feels’ to a bird that readily detects it.

We may never be able to help blind people ‘see’ the world as we do, because we all ‘see’ the world in different ways, and being blind for a long period of time and then suddenly detecting light waves would produce a different kind of interpretation in the brain.  The same applies to hearing or taste/smell: a life-long deaf person who gains the sense of hearing will not understand language by its sounds.  He/she will not be able to talk on the phone right away, because he/she first needs to learn how to associate these noises that we are so familiar with, the spoken language, with the sign language and lip reading that they were accustomed to before.

So, to create a replica of the eye, you first must understand that the brain is doing most of the work when it comes to seeing. Once you do, you can invent devices that ‘see’ via sound or other means, as I will exemplify.

To replicate what the eye does.

Sensing lightwaves: Different parts of the eye can become damaged or non-functional, so different methods are needed for restoring ‘sight’.  Imagine the mechanism of sight as a complex set of sensors and wires, each having its own function within the system.  If one of these wires or sensors stops working, there are mechanical solutions for replacing at least some of its functionality to make the system work again.  As an example, if the light sensors inside your eye no longer work, or are missing, but the entire pathway from them to the brain works, then the challenge is to replace these defective biological sensors with a device that simulates their functionality, connecting the light from outside with the rest of your biological system.

One way this is currently done is through a small video camera that ‘sees’ the world and transmits wireless signals to a small chip that replaces the biological light sensors.  The video camera basically communicates with the electronic device implanted inside the eye, which then activates the rest of the biological system for vision.  There are limitations to this approach, in that one basically ‘sees’ variations of light and dark.  It’s not as vivid as what ‘normal’ eyes see, and the person needs to learn how to decode these signals to be able to use this new ‘sight sense’.

https://www.youtube.com/watch?v=CiyGOUHD2nI

This type of device worked in two-thirds of the blind patients that participated in clinical trials, and some of the patients who could finally see were even able to read letters.(source)
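
To get a feel for how coarse this kind of restored ‘sight’ is, here is a small toy sketch that reduces a grayscale camera image to a grid of light/dark dots, roughly the kind of pixelated percept described above.  The grid size and threshold are arbitrary; real devices differ.

```python
import numpy as np

def to_phosphene_grid(image, grid=(6, 10), threshold=0.5):
    """Toy sketch: reduce a grayscale image (values 0..1) to a coarse
    grid of light/dark 'dots', loosely like the low-resolution percepts
    described for current retinal implants (grid size is made up here)."""
    h, w = image.shape
    gh, gw = grid
    out = np.zeros(grid, dtype=bool)
    for i in range(gh):
        for j in range(gw):
            block = image[i * h // gh:(i + 1) * h // gh,
                          j * w // gw:(j + 1) * w // gw]
            out[i, j] = block.mean() > threshold  # bright enough -> 'on'
    return out

# Example: a bright vertical bar on a dark background
img = np.zeros((60, 100))
img[:, 40:60] = 1.0
print(to_phosphene_grid(img).astype(int))
```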

There are more examples of replacing various parts of the visual system, and you can read about those in more detail here and here.  All of them produce, at most, a grey pixelated image, providing a system for formerly blind humans to distinguish between dark and light, with nuances in between.  It’s not close to how biological eyes work, but is still quite remarkable, considering how complex our vision sense is.

But what if you bypass the entire sight system and connect devices directly to the brain?  Well, this can be done, too.  New technologies can connect a video camera directly to electronic devices implanted in the brain's visual cortex, enabling people to ‘see’ without any part of the biological system for sight.  Clinical trials for this technology are expected to begin in a year or so.(source 1, 2)

All of these lightwave-interpreting technologies are rather similar, in the sense that they collect light waves and convert them into signals that the brain interprets as light and dark regions, which the user can then learn to differentiate into separate forms and shapes.

You can even ‘see’ with your tongue, highlighting how ‘seeing’ is actually a process that the brain creates while being stimulated by other organs, like the tongue in this case.  With this technology, a camera detects lightwaves and transforms them into an electrical pattern that is sensed by the tongue through a device that you need to keep inside your mouth.  Although this device does not connect with the ‘visual’ part of your brain, it allows you to convert lightwaves into patterns that you can feel, so you are basically ‘seeing’ with your tongue.(source)

https://www.youtube.com/watch?v=xNkw28fz9u0

Similarly, blind people can ‘see’ with sound - not like the guy who can differentiate colors with sound, but in a more complex way, allowing blind people to interpret lightwaves via different sound types.  The way the sound is constructed, from tone to duration, creates a sort of alphabet and a ‘visual’ description of the world.  This sound alphabet is then used to convert what a video camera sees into sounds that can be perceived and understood by the blind.  The process is complex and extremely interesting, and it was shown that the ‘visual’ part of the brain is activated in people using this technology when they imagine the scene in front of them.  The technology works so well that blind people can even distinguish facial emotions, as seen in this TED presentation - https://www.youtube.com/watch?v=jVBp2nDmg7E
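
Here is a toy sketch of one way such a ‘sound alphabet’ could be built: scan the camera image from left to right, let the vertical position of a bright pixel set the pitch and its brightness set the loudness.  The exact encoding used by the systems above may differ; all the parameters here are made up.

```python
import numpy as np

def image_to_soundscape(image, duration=1.0, sample_rate=8000,
                        f_low=200.0, f_high=2000.0):
    """Toy sketch of 'seeing with sound': scan an image left to right,
    let vertical position set the pitch and brightness set the loudness.
    (Real systems are more refined; the parameters here are invented.)"""
    h, w = image.shape
    samples_per_col = int(duration * sample_rate / w)
    freqs = np.linspace(f_high, f_low, h)        # top rows -> higher pitch
    columns = []
    for col in range(w):
        t = np.arange(samples_per_col) / sample_rate
        tone = np.zeros(samples_per_col)
        for row in range(h):
            brightness = image[row, col]
            if brightness > 0:
                tone += brightness * np.sin(2 * np.pi * freqs[row] * t)
        columns.append(tone)
    audio = np.concatenate(columns)
    return audio / (np.abs(audio).max() or 1.0)  # normalised waveform

# A bright diagonal line becomes a sweep of tones across one second
img = np.eye(16)
print(image_to_soundscape(img).shape)
```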

So, the brain may be more of a ‘task’ organ than a sensing one, and the task of ‘seeing’ is basically interpreting, in a particular way, inputs from different organs, such as the tongue, ears, or a direct connection of electronic sensors to the brain.

When we attempt to restore ‘vision’ through mechanical devices, we must understand why this is such a complex task and why there might be other alternatives for ‘seeing’.

Many motor and sensory ‘achievements’ of the brain can already be fine-tuned or restored, as we have seen with the brain ‘seeing’ by being connected directly to an electronic device that bypasses biological stimulation.  A motor function example is Parkinson’s disease, in which people experience involuntary movements that can even make it difficult to walk.  When implanted, thin pieces of metal that release tiny amounts of energy into the brain can basically ‘get rid of’ the involuntary movements specific to Parkinson’s, as showcased in this short documentary - https://www.youtube.com/watch?v=zCwhBsdHIV0

There are many brain implant devices aimed at restoring normal body/brain functions, as you can read in more detail on Wikipedia, but all of them are either sensory or motor-related.  However, there are other brain functions, like the encoding of memory, that can be restored or repaired with the help of electronic devices.  This is a new field of prostheses where the focus is on the brain’s cognitive functions (basically thinking) with the aim of replacing damaged neurons with electrical devices that can perform some of their tasks.  There are only animal trials, so far, for the technologies that I am going to highlight, but they are worth mentioning as this may open completely new doors as to how we can ‘fix’ the brain, or even enhance its functions.

In 1953, a patient by the name of Henry Molaison underwent a surgical procedure to alleviate epileptic seizures.  The procedure partially destroyed the part of his brain that we call the “hippocampus”.  The result was reduced epileptic seizures, but something unexpected also happened: Henry could no longer form long-term memories.  As an interesting fact, almost all brain functions were discovered through similar situations, where people with a damaged brain exhibited different symptoms or were impaired in different ways.  I recommend this BBC documentary that looks at the history of such ‘random’ discoveries.

Thus, Henry Molaison’s life was basically destroyed by the surgery, even as it helped doctors better understand what that part of the brain does.

We now know that the hippocampus is the first area of the brain that is affected in people with Alzheimer’s disease, which makes people unable to form or retain long-term memories, among other impairments.

The hippocampus is basically a bunch of neurons that, to simplify it a lot, receive and transmit electrical signals from one part of the brain to another.  A team of scientists analyzed these signals for years to develop computational models that can understand and replicate what outputs the hippocampus sends out for a particular input.  Basically, if this series of letters and numbers (34vfmf843) goes into the hippocampus, then this series (99800uuioo) is the output, which the hippocampus transmits to other parts of the brain.  In that sense, they understood how to decode and encode these signals so that, in theory, they could build a tiny device that takes inputs and properly outputs them further into the brain’s system, replacing what parts of the hippocampus once did.
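
As a deliberately simplified picture of that idea, the sketch below ‘records’ many input-output pairs, fits a model on them, and then uses the model as a stand-in for the damaged tissue.  The real prosthesis relies on far more sophisticated multi-input, multi-output models of spiking activity; this is only an analogy with made-up numbers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend these are recorded spike-pattern pairs from healthy tissue (hypothetical data)
recorded_inputs = rng.random((200, 16))     # 16 'input' channels
true_transform = rng.random((16, 8))        # what the healthy hippocampus 'does' (unknown to us)
recorded_outputs = recorded_inputs @ true_transform

# 'Learn' the input->output mapping from the recordings (least-squares fit)
learned, *_ = np.linalg.lstsq(recorded_inputs, recorded_outputs, rcond=None)

def prosthetic_chip(input_pattern):
    """Given an incoming pattern, emit what the healthy tissue would have emitted."""
    return input_pattern @ learned

new_input = rng.random(16)
print(np.allclose(prosthetic_chip(new_input), new_input @ true_transform))  # True
```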

They started to experiment with living neurons, and it worked.  The tiny devices were able to replicate parts of the hippocampus.  They then went further and tested it in mice.  They trained the mice to press a lever for a reward, in a way that actively involved the hippocampus in performing the task.  They recorded the hippocampus activity and the signals it received and transmitted.  They then injected a drug into the mice to impair some of their hippocampal function and, upon re-testing, observed that the mice performed at only 50% of their former accuracy, which is as good as random.  However, when they implanted these tiny devices into the mice’s brains to simulate their own hippocampal functions while on the drug, the mice performed almost as well as they had before receiving the drug.  If you have the time, you can read the entire study here.

They repeated the study with monkeys, this time for the prefrontal cortex area of the brain, impairing short-term memory functions in this region and then replacing that functionality with these tiny computational devices, observing the same results as in the mice.  The entire study can be read here.

Although this has not been tested on humans yet, clinical trials on humans are expected to begin soon.  This approach is very promising for dealing with Alzheimer's and other memory-related diseases, as well as for providing significant insight into how some of the brain’s functions work.  We may eventually learn how to replace many other damaged brain functions with mechanical devices.  How far this will go, I have no idea, but I suppose no one does.

By replacing parts of the human body with ‘mechanical’ and/or electronic devices, we can not only significantly improve the functionality of numerous human parts, but also reduce the immense pressure on current organ transplant systems.  Instead of relying on ‘borrowing’ parts from other people (mostly dead ones), parts which can be rejected by the recipient’s body, we are becoming better able to substitute them with mechanical alternatives, thus moving closer to satisfying the huge demand for replacement parts by people suffering without them.

One very important thing to mention is that, even while these solutions exist and are readily available, many people are still allowed to die within today's monetary system, just because they cannot afford them.  It still requires a whole lot of silly ‘pieces of paper’ to have your life saved.

I wonder how overall development of these mechanical alternatives for body parts would increase in a world where research and development is no longer ‘impaired’ by money and where the primary drive for people evolves into the well-being of humans and the environment.

In the first parts of this book, we discussed what mechanical replacements exist for the human body.  Here, we will look beyond the idea of ‘fixing’ humans with technology, by looking at extending their capabilities.

Cell phones, clothes, the internet, air conditioning, cars, buildings, shoes, knives, refrigerators, telescopes, microscopes, various uses of nanotechnology, biotechnology, and all other fields of science provide enhancements for us humans, as we become better able to see farther and deeper, to analyze structures and forces of the world that we are not able to detect or measure with our senses, to protect ourselves from harmful external and internal factors, and more.

CHART

We highlighted many technologies in our AA WORLD book, showing how we could make far better use of them than we do today, but now we will focus on technologies that allow us to improve our biological abilities, exceeding what our DNA coded for us.  This book’s focus is specifically on ‘machines’ that enhance our existing biology, while the next issue’s installment will focus extensively on physically ‘manipulating’ our biology.

Of course, it’s hard to define exactly what I mean by ‘enhancing biology’, as pretty much all of the technologies that we have presented so far manage this in one way or another.  So let’s look at two major technologies/ideas that will enhance human beings’ biology: nanobots and new senses.  These approaches are not only about allowing us to be healthier and to sense the world in new ways but, as you will see, they may also significantly change the way we communicate and understand the world.

Nanobots:

You have probably heard of ‘nanobots’, but what are they and do they really exist?

The idea of tiny ‘robots’ may project a serious misunderstanding of what these ‘things’ are, so I’ll try to clarify it here.  The human body, as we have discussed in recent articles, is made up of tiny structures that we call molecules and relies heavily on combinations of these ‘shapes’ (molecules) to perform different kinds of functions.  As described in our Earth book, drugs are nothing more than specifically shaped molecules that have been found to be able to bind with specific molecules within our body to ‘fix’ it.  They are like keys that unlock specific ‘doors’.  But the way that medicine is currently used is more like trying to unlock a real door by throwing millions or billions of keys at your apartment, hoping that one will hit the door’s lock and open it.  It works, to a degree, only because of the massive number of keys you throw at the issue, but these keys can also damage other ‘things’.  As an example, if you have a specific key that can unlock the self-destruction mechanism in a cancerous cell, then it is very risky to dump billions of those keys into a human body, as they may very well kill many healthy cells as well.

Now, here comes the nanobot.  A nanobot is nothing more than a bunch of molecules, much like drug molecules or the molecules that form your DNA, that are smartly assembled by humans into specific shapes, similar to how you might create a 3D model, and their role is a ‘mechanical’ one.

Here are the basics of how one is built:

Typical DNA is composed of two strands bound to each other in a special shape (the double helix), where the connectors on one side (strand) match with those of the other side, somewhat similar to a zipper.  If you start with just one side of a zipper, and then create and add smaller pieces of other half-zippers that only match some positions/parts of the first half-zipper, you can make the entire first long piece of half-DNA change its shape any way you want to.
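
To picture the ‘half-zipper’ idea in a toy way: a short ‘staple’ strand can bind wherever its sequence is complementary to the long strand (A pairs with T, C with G).  The sketch below ignores real-world details such as the antiparallel orientation of the strands, and the sequences are invented.

```python
# Toy illustration of the 'half-zipper' idea: a short 'staple' strand
# binds wherever its sequence is complementary to the long strand.
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(seq):
    return "".join(COMPLEMENT[base] for base in seq)

def binding_sites(long_strand, staple):
    """Positions where the staple's complement appears on the long strand."""
    target = complement(staple)
    return [i for i in range(len(long_strand) - len(target) + 1)
            if long_strand[i:i + len(target)] == target]

scaffold = "ATGCGTACGTTAGCATGC"
staple = "TGCAAT"                       # its complement 'ACGTTA' appears in the scaffold
print(binding_sites(scaffold, staple))  # -> [6]
```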

Here’s an animation with the process - https://www.youtube.com/watch?v=5yH5LTXxFzk

These are real images of real structures made entirely out of DNA and using the method I just described above.

We also recommend that you watch this TEDtalk video to better understand how this works, as it is a very interesting process.

http://www.ted.com/talks/paul_rothemund_casts_a_spell_with_dna

Today, they are able to make many different tiny molecular shapes that, because of their form, can perform many functions.  To keep to the same example with the cancerous cells, if you are able to place “cell killer” keys inside of a ‘cage’, and then design this ‘cage’ to open only when it comes in direct contact with a cancerous cell, then you can deliver the cargo (the drug/key) only to cancerous cells throughout a body, without causing any harm to healthy cells.  That cage is a nanobot.  So, instead of throwing billions of keys at an apartment to get one to unlock the door while the others damage the apartment, imagine all of those keys wrapped inside soft tiny boxes that cannot damage the apartment, and these boxes only open and release the key when they make direct contact with the door lock.  This way, you will not damage the apartment while benefitting from a much more exact delivery system.
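
A toy way to see why the ‘caged key’ matters: in the sketch below, each carrier bumps into one cell at random and only opens if that cell is cancerous, so the healthy cells are never exposed to the drug.  The cell counts and proportions are, of course, invented.

```python
import random

# Toy model of the 'caged key' idea: each carrier bumps into one cell at
# random and opens (releases its drug) only if that cell is cancerous.
random.seed(1)
cells = ["healthy"] * 9_900 + ["cancerous"] * 100   # made-up proportions
carriers = 5_000

opened = sum(1 for _ in range(carriers)
             if random.choice(cells) == "cancerous")

print(f"{opened} carriers released their drug on target; "
      f"{carriers - opened} stayed sealed and caused no harm")
```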

This is not a theory.  This is now happening ‘in the lab’ with animal testing, where they are already able to ‘build bridges’ for tissue growth (for example, for spinal cord injuries), detect various types of viruses/bacteria, deliver many kinds of drugs, or actually target cancerous cells with success (they can identify 12 types of tumors).  Real ‘photos’ of these nanobots - http://1.bp.blogspot.com/-gTr1-FHej9A/VJ91n9AuwMI/AAAAAAAA5eA/4BUy4TrR02o/s1600/screenshot-www.youtube.com%2B2014-12-27%2B19-12-24.png

They can even be made to ‘cooperate’ with each other and behave more like a swarm.  This is made possible by their lego-like behavior: when one combines with another, one or both of them may ‘open up’ or otherwise change their combined shape toward a specific outcome.  It can also be compared to a computer program, as they can be built to load an ensemble of related drugs inside many of these boxes for programmed release, all based on specific situations that may be found in the body.  So, if they find a particular situation/disease that requires 5 different drugs to be administered in a specific order and over specific time intervals, then, by the way the containers assemble after being triggered by the encountered situation, they can open their ‘cages’ in a particular way to release the 5 required drugs as needed, rather than all at once.

Watch this video to better understand this - https://www.youtube.com/watch?v=aA-H0L3eEo0

If the human body can be mapped by the unique molecules that are found in each individual area of the human body, then these nanobots can use that map to better target specific zones.  It is also now possible to activate or deactivate these nanobots using remote control, which significantly adds to their capabilities.  Watch this TED talk for additional information about all of this.

https://www.youtube.com/watch?v=-5KLTonB3Pg

The same researchers recently announced that a human trial is due to begin very soon for treating leukemia (a form of blood cell cancer).(source)

While these ‘nanorobots’ are essentially various molecular shapes that bind and lock-unlock when in contact with certain targeted molecules inside the human body, and their reactions are continuously being made more sophisticated, they still represent a ‘shoot in all directions’ solution, as they must be injected into the body, perhaps by the billions.  They are ‘able’ to bind where they are intended to bind largely because their sheer numbers moving through the body increase the chances of locating all of the existing targets that require their ‘treatment’.

The research and promise of these tiny structures are fantastic, but there is still much more to ‘nanobots’.  Another approach is to develop nanobots that are more than simple molecular shapes; more complex and better controlled from ‘outside’, so they can perform more like the ‘real’, full-size robots that we are used to.  There are already a few examples but, keep in mind, although they may seem simple and still perform relatively primitive tasks, this research is much more about continually expanding the future capabilities of these nanorobots and about how humans can already manipulate and control such extraordinarily tiny devices.

This tiny rocket-shaped ‘thing’ is 60 times larger than the molecular bots above, but it is actually a motor-based nanobot - perhaps the tiniest motor in the world.  It can spin extremely fast while being controlled by soundwaves and magnetism for rotational speed and overall movement.  It can also be coated with certain biochemicals that are then delivered according to the motor’s rotational speed, so these bots can be controlled for how much medicine they ‘deliver’, and through magnetism researchers can control where these tiny nanobots go to deliver it.  They can also be made to target, for instance, cancerous cells and then puncture/destroy them from outside, or from inside the cell, where these nanobots can insert themselves and, by spinning at very high speed, literally ‘shred’ the cell’s interior.  These nanobots can also move autonomously and, perhaps in the near future, be able to find and automatically cure all kinds of cell-related diseases.  Even more interestingly, they plan to focus on making these tiny rocket-shaped robots assemble themselves into bigger structures for performing more complex tasks.(source) - video

Here is some real footage of these nanobots in action: PLAYLIST

Other mobile nanobots currently exist, but these are only being tested for how they move within the human body, without any specific application for them yet.(source)

Some fascinating research is also going into decoding the ‘natural’ healing properties of the human body, and some of these functions are now known to be connected with the nervous system.  By introducing tiny nanorobots in key locations, researchers can now tweak some parts of the nervous system to ‘cure’ some diseases.  So, instead of relying on ingested drugs that, due to their huge number spreading throughout the body, eventually find themselves at the right spot, and instead of nanobots that can deliver drugs to more targeted spots, this new approach tweaks the body to create and deliver the proper ‘drugs’ (molecules) itself, to the proper locations.  This is a very new approach, but it has already been tested in several patients and seems to already be working for a handful of symptoms/diseases.(source)

Explanation video - https://www.youtube.com/watch?v=NhXtSy-Ccvg

A hundred or so years ago, human beings started building up a better understanding of cancer, deciding that the best way to remove cancer would be through surgery.  What they quickly realized is that, in almost all cases, the cancer reappeared after the surgery.  As a result, they concluded that they would have to cut out even more bits of the ‘infected’ human parts to better ensure removal of all of the cancer.  With breast cancer, for example, they often ended up removing huge chunks of the pectoral and arm muscles, leaving the women with parts of their bodies completely non-functional.  The procedure was gruesome and inefficient.(source)

Today, we use similar methods for dealing with cancer, except that the scalpel is increasingly replaced with ‘toxins’ (chemotherapy) or ‘radiation’.  Chemotherapy is a method of injecting substances that kill cancerous cells into the body, but the problem is that it cannot always differentiate between them and normal cells and, therefore, destroys healthy cells as well.(video explanation)  Radiation treatments shoot atoms or particles that are smaller than atoms at the cancer cells from an external device.  While it boasts much higher precision than chemo, it cannot target cancerous cells that are widely spread throughout the body (metastasis).(video explanation)  These approaches are merely more precise versions of ‘old-fashioned surgery’, since they also affect other organs and are still quite imprecise at removing all cancerous cells.

But nanobots change all of this, as they are the perfect ‘surgeons’: targeting only what you want them to target, and managing that goal throughout the entire body.  Imagine having these small robots inside you, responding to and curing the earliest stages of various diseases without you even being aware of it.  This ‘continual state of near-optimum health’ highlights the power of these tiny bots: it will enhance our biology, making us more resistant to diseases (and perhaps immune to most).

New Senses:

Humans have 5 senses, right?  Well, no.  Humans can sense the world in many different ways, through many various inputs.  Skin, tongue and nose, ears, hair follicles, eyes, pain receptors, pulmonary stretch receptors, stretch receptors in the gastrointestinal tract and many other receptors allow us to ‘feel’ different ‘things’: temperature, balance, lightwaves, soundwaves, certain chemical reactions, vibrations, the need to pee, eat, sneeze; we can feel dizzy because of certain chemicals or visual/auditory cues, sick, cold, hot, and so on.

There’s no proper way of defining and categorizing a ‘sense’, since sometimes many of them function together as one, or one cannot be fully isolated and/or understood.

When I first tried seafood and a friend asked me what it tasted like, I said “chicken”.  How else could I describe the taste?  If I had used chemistry and biology to describe the taste to him, it would have been extremely complicated (perhaps completely unrealistic), but since we both have the same kind of taste receptors and we had both tasted chicken before, we could relate it to that experience.  The way we ‘sense’ the world, while certainly subjective, seems to be the most powerful communication device and the best tool for us humans to understand the world in and around us.

I can use a compass to guide myself around on the planet, or I can study the physics of the magnetic field of the Earth, but it would become far easier for me to have a belt around my waist that allows me to basically ‘sense’ Earth’s magnetic field through tiny electrical impulses or vibrations to my skin that indicate, for example, the direction and distance to the North Pole.  That would help me make sense of it far more completely than with the aid of a simple compass and/or strong academic understanding of the physics behind it.  I watched a documentary many years ago showing how they had tested such a belt, and it proved to be very efficient in allowing a person to better understand his/her position in space, while the subjects’ overall orientation improved significantly.  Similarly, tiny electronics are now being developed that can act as a sensor of magnetic fields (source).
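
For the curious, here is a minimal sketch of the logic such a belt could use: given the wearer’s compass heading, pick which of the evenly spaced vibration motors should buzz so that the buzz always ‘points’ north.  The motor count and layout are assumptions made for the example.

```python
def motor_for_north(heading_degrees, n_motors=16):
    """Toy sketch of a 'north sense' belt: given the wearer's compass
    heading, return which of the belt's vibration motors (spaced evenly
    clockwise around the waist, motor 0 at the front) should buzz so
    that the buzz always points north. Layout is invented for illustration."""
    bearing_to_north = (-heading_degrees) % 360   # where north sits relative to the body
    return round(bearing_to_north / (360 / n_motors)) % n_motors

print(motor_for_north(0))     # facing north -> front motor (0)
print(motor_for_north(90))    # facing east  -> motor on the left side
print(motor_for_north(180))   # facing south -> motor at the back
```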

As we’ve shown earlier in this book, the brain is the ‘task’ organ while the rest - ears, eyes, skin, etc. - are the ‘sensing’ organs.  Therefore, adding a new sense, or a set of senses, should not be a difficult task for the brain to adopt.  If you think about it, so many creatures have brains similar to ours, and many of them have very different kinds of senses.  Some are very sensitive to heat, some are able to see in low light, smell thousands of times better than us, detect lightwaves well outside of our natural range, sense the magnetic field of the Earth, enjoy 360-degree vision, and so on.  All of these tasks, although sensed by different kinds of organs, are managed by their ‘neurons’ (brain).

So, can we add new ‘senses’ to our own neurons?  Sure we can.  We’ve already highlighted some expansions of our existing senses in the previous issue (hearing light, seeing sound, etc.).  Those were intended to replace some biological errors (blindness, for instance), so let’s take a closer look at some that can enhance our ability to sense.

Here is a girl who can sense earthquakes and, with precision, the speed of moving objects around her.  A sensor implanted in her elbow is connected to a network that monitors earthquakes around the world.  Whenever an earthquake occurs, she feels a vibration in her elbow, where the vibrational intensity relates to the quake’s intensity.  After some time, she acclimated to the new sense, ‘feeling’ the Earth’s quakes and their intensities as ‘naturally’ as we understand how chicken tastes.  That same girl also has sensors added to the back of her head (and in her earrings) that detect the speed of objects and, again, transfer that to her via vibrational patterns, allowing her to ‘feel’ the speed of objects.  For instance, instead of saying that a car is moving at 100 km/h (62 miles/hour) and a human at 5 km/h (3 miles/hour), this girl can ‘feel’ these speeds and understand the difference between them.  Of course, this is not to suggest that she can tell us exact measurements of speed, but it does provide a new way of understanding the world around us.  See, we mainly rely on just 2 senses: vision and hearing.  We can look at the Moon and we might be aware that it is 384,400 km (about 239,000 miles) away, but we really have no idea how far away that distance really is.  We often try to relate incomprehensible things with other things that we are much more familiar with, such as: it would take around 20 weeks to arrive at the Moon if you could drive there at an average highway speed.  We can relate to this because we drive cars on highways and we ‘experience’ days.  This comparative approach relies on a kind of ‘relational soup’ between experience (senses) and knowledge.

https://www.youtube.com/watch?v=UEffj-itNNM
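
A toy version of the earthquake sense could be as simple as mapping each reported magnitude to a vibration strength, as sketched below.  The thresholds and the scale are invented; the real implant presumably uses its own mapping.

```python
def quake_to_vibration(magnitude, min_mag=3.0, max_mag=9.0):
    """Toy mapping from an earthquake magnitude to a vibration strength
    between 0 and 1 for a wearable buzzer. Thresholds are invented;
    a real implant would choose its own scale."""
    if magnitude < min_mag:
        return 0.0                       # too small to bother signalling
    scaled = (magnitude - min_mag) / (max_mag - min_mag)
    return min(1.0, scaled)

for m in (2.5, 4.0, 6.1, 8.8):
    print(m, "->", round(quake_to_vibration(m), 2))
```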

So, imagine if we were able to just look at the Moon and ‘feel’ how far away it is.  Wouldn’t that provide a more accurate understanding for us humans?  Imagine the same principle when we are traveling: feeling how close we are to the destination, not relying on written numbers and sounds that we may or may not be able to understand.  Senses that can ‘feel’ distances and speeds would allow us to understand much better the parts of the world we live in, without knowing the physics behind them.  I know how it ‘feels’ to balance on a swing, but I would find it impossible to properly describe that to another human in ‘scientific’ terms, or for them to understand my attempts at describing it, without both of us being able to feel that sensation.

Our entire life experience relies on our senses to function: we associate colors with different situations (hot or cold water, traffic lights, warnings & notifications, and so on).  We still rely mainly on bodily symptoms for detecting that something is wrong with our body (nausea, fever, etc.); and if we were all to live in a world without sound (as most deaf people do), we would find it much more difficult to function (no ‘music’, no auditory warnings for approaching cars or impending explosions, no voice recognition or vocal inflections to help you determine the other person’s state of mind or excitement level during communications with them, and much, much more).

I went to eat something earlier today because I ‘felt’ hungry.  I took some pasta from the fridge and put it in a pot with cold water (I knew it was cold, as I could check it - I felt it).  I then put the pot on the stove to bring the water to a boil.  I ‘heard’ my phone ring, so I answered it.  I talked for 10 minutes with the person who called.  I hung up.  I went to the bathroom (I felt the need to pee 🙂 ).  I left the bathroom and felt steam (heat), which reminded me about the pasta.  I went to the kitchen and turned off the stove, since the water was boiling.  Once it cooled down, I ate the pasta.  I open doors in the house based on how I am used to opening doors, and not based on principles of physics (applying a certain force); I eat because I feel the need to, not because of a formula that calculates my nutrients and recommends that I eat at specific times or in specific quantities; I close my eyes when there is too much sun, rather than because of any biological understanding of pupils, sun rays, etc., since that’s simply how my body reacts to what it feels; and I don’t go to pee because some smartphone app alerts me that my bladder is full and needs to be emptied so it can take on more ‘liquid’.  The way we dress, what we eat and when, what we pay attention to, the way we interpret the world, and many other aspects of life are all extremely connected to how we ‘sense’ all of it (cold, hungry, etc.).

Expanding our sensing abilities can dramatically improve our understanding of the world and make it easier for us to, basically, live.

Video - Hack Your Body To Have Superpowers

https://www.youtube.com/watch?v=lFtYgj5Lt6Q

However, adding various kinds of devices to detect movement, earthquakes, magnetic fields, temperature, and so on is not the entire story.  There is now a more complex type of sensor apparatus, in the form of a vest, that works on the same principle of vibrations to the skin, but this time the vest is designed to produce complex sets of vibrations in a way that lets its wearer recognize spoken language and ‘communicate’.  Because the vest has multiple vibration modules, it can create a huge variety of distinct vibration patterns, allowing a deaf person to associate these vibrations with distinct spoken words and, basically, understand ‘language’.  This is a huge advancement, because if a deaf guy can learn how to understand words (vibrations in the air) through this vest, as showcased in the video below, then a huge variety of more advanced scenarios can be imagined.  Watch the video here - https://www.youtube.com/watch?v=4c1lqFXHvqI
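
To illustrate just the ‘distinct pattern per word’ part of the idea, here is a toy sketch that turns any word into a reproducible on/off pattern across a set of vibration motors.  A real vest derives its patterns from the sound itself rather than from a hash, and the motor count here is arbitrary.

```python
import hashlib

def word_to_pattern(word, n_motors=24):
    """Toy sketch: turn a word into a reproducible on/off pattern across
    a vest's vibration motors, so that distinct words 'feel' distinct.
    (A real vest derives its patterns from the audio itself, not a hash;
    this only illustrates the 'one pattern per word' idea.)"""
    digest = hashlib.sha256(word.lower().encode()).digest()
    bits = "".join(f"{byte:08b}" for byte in digest)
    return [bit == "1" for bit in bits[:n_motors]]

for w in ("water", "fire", "water"):
    print(w, "".join("#" if on else "." for on in word_to_pattern(w)))
```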

The variety of inputs that our brains can interpret and make sense of can be greatly improved and extended beyond our existing biological senses, and once you do that, you can connect them with other sensors and big data.  Imagine ‘feeling’ when a virus outbreak is near you, so you can take appropriate measures; being able to detect when toxic chemicals, undetectable by your biological sensors, are around you; or feeling when you have entered a dangerous area - for instance, a construction area where, instead of a visual sign that many may miss or be unable to translate from another language, you simply ‘feel’ the need for heightened awareness.  Imagine ‘feeling’ big data, such as the overall health status of a population, instead of having to gain such awareness through statistics.  We can even imagine ways of feeling others’ pain, discomfort and level of happiness by decoding and wirelessly transmitting their state through similar sensors, which may also provide a new way of explaining to a doctor or to a loved one how you ‘feel’.  The list is endless and, from navigation to sensing various ‘events’/forces/waves, to compressing complicated big data or distances, heights, etc. into easier-to-understand patterns of ‘senses’, we can become far better than we are today at understanding the world we live in, and ourselves.

We already use lots of devices (smartphones, supercomputers, light detectors, the internet, and so on) to extend what we are, but perhaps these tools are extremely primitive, as they represent a limited conversion of a complex world of which we can only experience a tiny fraction, and they are designed for just a handful of limited senses that we use to interpret it (sight, hearing, smell, and a few others).

Coupling these new senses with nanobots may very well allow us to become vastly different from what we are today: diseases and other difficult problems may auto-‘fix’ inside our bodies without us even realizing it, and we will be able to experience the world in completely new ways.

In the near future, we’ll be able to look up at the Andromeda galaxy, the nearest one to our own, and see it in great detail via contact lenses that can stream live captures from powerful telescopes across many different lightwaves.  We could even share the feeling it gives us with others, and we will all ‘feel’ how far away it is, without knowing the distance in km or miles.  At the same time, tiny robots inside our body may be eliminating cancerous cells or any kind of disease before it can form, without any need for our assistance or awareness.  Various sensors will allow us to feel more connected with the Earth and better guide ourselves while exploring it, while others may make us feel closer to each other and better understand how others feel, including how other animals may ‘feel’ the world.

We must not forget that no matter what new senses can be successfully added to our biological bodies, they are just as subjective in their details as the biological ones.  Just as today, we may sense the same ‘expanded’ lightwaves in the future, but still interpret them differently.  We may have similar taste sensors, but the foods we eat will taste different due to cultural influences.  So, yes, these new senses will still be subjective, but more alike and easier to understand and relate to than mere words can manage today.  Also consider that, while we may already be able to feel things like heat and pressure in many different ways (damn useful in our day-to-day lives, even though we need science to describe them), however acute and complex our senses become, our brains simply do not have the capacity to make total sense of all of the available information in an ‘objective’ and categorized way as clearly as we can through science.  But with advancements in ‘big data connectivity’, for example, maybe that won’t matter.

Who knows what we will become…

So far we have discussed “man” and “machine” as two separate entities.  But the fact remains that we are all made from the same building blocks, and those building blocks are all mechanical.  In this “Human-Machine” installment, I will try to show you why it is important to look at the human body as a ‘machine’ (which indeed it is), so that we can understand its parts and how they work together.  This view opens up the potential to ‘fix’ and enhance bodies without any need for electronic or mechanical devices, by instead ‘tweaking’ the biological-mechanical parts for each desired purpose.  From 3D printing organs, to the ability to grow tissue, and even the process of creating a human, let’s go through them all.

When agriculture first emerged about 12,000 years ago, there were around 15 million humans on the planet.  Now consider this: that is 5 million fewer than the population of New York.  Crazy, isn’t it?  But what’s even crazier is that the resulting rise in terms of ‘billions’ occurred at such a relatively rapid pace.  It took about 12,000 years for humans to grow from 15 million to 1 billion (in 1804), but then only 123 more years to double, 33 years to reach 3 billion, and then around 12 years for every additional billion, reaching more than 7 billion humans at present.(source)

Now comes the question: how much did Earth weigh when there were only 15 million people, compared to the 7 billion it ‘hosts’ today?  Although humans weigh very little compared to the entire Earth, would all those extra people make a measurable difference?  The interesting answer is that this is a trick question, as the planet would weigh exactly the same!  Well…, almost.  Except for the dust (cosmic objects) that may have fallen onto Earth from space, and the ‘stuff’ that Earth may lose through the nuclear reactions at its core(source), all of the stuff you see - mountains, people, clothes, smartphones, candies, the dinosaurs that existed, cars, cats, trees, and so on - is all recycled atoms.  It’s the same massive handful of tiny atoms, just arranged in different ways to create all of this complexity.

Humans are indeed bits of other humans, bits of valleys, furniture, dogs, even feces.  What was once an atom that helped form a dinosaur may very well be part of your nose right now, or part of your spleen.

One interesting thing to keep in mind is that most of the elements (types of atoms) that make up your body can only be formed as a star explodes.  Only then!  This means that at least one star had to explode for you and I to be here.  As the song goes, we are stardust.

So, atoms make up cells, and although there are many kinds of cells out there, only two kinds of these cells make up you.  Meet your real parents: the egg and sperm cells.

The process of reproduction seems very complex, but the main thing that has to happen is for these two ‘parent’ cells to meet.  It may seem complicated because one of these tiny little structures needs to migrate from one body to another, and they meet via a rather complex path.  In the same way that you can inject drug molecules or nanobots into the human body by the billions, ‘hoping’ that some of them will eventually reach their intended destination, the same approach is taken by sperm cells that are (normally) ‘injected’ via the penis ‘syringe’ into the female’s body via the vagina.  There are typically hundreds of millions of such tiny sperm cells in a single ‘injection’, and because they move around like crazy, some of them may end up in the right place.  But even if an egg happens to also be in that place if/when they arrive there, only one of the sperm cells can combine with the egg cell to make up another human.  Keep in mind that this is just a very short and simplified description, as it’s actually more complicated than that.

The human female can produce a finite number of such egg cells(source) - about 400 of them - with each having the potential to become a ‘new human’ via combination with a sperm cell.  So, if you are a female, you live long enough, most of your egg cells are ok for reproduction, and you are able to take in sperm cells at the right time (wherever/however you choose to seek them), then you could potentially produce around 400 human children.  Of course, it’s not really that simple.  As this animation shows, egg cells are only ‘released’ one at a time from their ‘shells’ (inside the female body) over about 35 years of a female’s lifetime.  Then consider that each released egg can only survive for roughly 24-48 hours after ‘release’, and the average lifespan of a sperm cell that has found its way into a female uterus is also about 24-48 hours.  So, although sperm cells are typically introduced into a female in huge quantities, they have a very narrow ‘window of opportunity’ to reach an egg and trigger the transformation.
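
The rough arithmetic behind those figures is worth spelling out (using only the numbers mentioned above, plus an assumed ~30-day cycle):

```python
# Rough arithmetic behind the figures above (400 eggs, ~35 reproductive years,
# 24-48 hour viability are from the text; the ~30-day cycle is an assumption)
eggs_released = 400
reproductive_years = 35

print(round(eggs_released / reproductive_years, 1), "eggs per year")  # ~11.4, roughly one a month

# With an egg viable for ~24-48 hours after release, the fertile window
# is only on the order of 1-2 days out of each ~30-day cycle.
print(f"fertile fraction of a cycle: {1/30:.0%} to {2/30:.0%}")
```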

Watch this animation to see how the ‘adventure’ of the sperm cell is mechanistic and based on ‘chances’: most sperm cells end up trapped in different parts of the female’s body, and some are even ‘killed’ by the female’s ‘immune system’, showing how there is no ‘purpose’ to the immune system or any such process - they are only reactions to different stimuli.  It’s not that the immune system wants to protect one’s body.  If an ‘immune system’ is tuned to react to a specific molecular shape and it turns out that the sperm cells have shapes that are similar, then the immune system will ‘attack’ them, too.  Here’s the video - https://www.youtube.com/watch?v=BFrVmDgh4v4

Once the two ‘real parents’ meet and combine, all of the information for a new human is there.  These ‘new baby instructions’, containing contributions from the DNA of both parents, are complete and can already tell how tall the person will likely be, whether they will have blue or green eyes, and even whether they are predisposed to heart disease.  Then again, only a little over half of these egg-sperm structures manage to survive and mature.  To read more about how DNA from both parents combines to form new life, read our article on ‘evolution’.

Given all the variables involved - the tiny structures that have to meet and combine, with all of the complex roads they have to follow - it’s a marvelous wonder that humans (and other animals) are able to reproduce at all.  It comes down to multiplicity: the sheer number of possibilities that allows such unique and complex events to happen.

Saying that these two little creatures, the egg and sperm, are your ‘true’ parents is not an exaggeration.  Since most people are not fully aware of the complex process of reproduction, many people ‘project’ that the male and female contributors of the sperm and egg are the parents, and society contributes greatly to this misunderstanding. To help solidify this understanding, consider how we can take the egg and sperm from two humans, combine them in a lab, and then insert the resulting cell into a different female (surrogate) where the baby will develop.  We may eventually be able to manage this entirely within the lab, no longer needing a surrogate to carry out the process of development.  This shows how those two tiny structures are the ‘true’ parents of each one of us.  Of course, they carry bits of information from the contributing male and female humans, which is why the baby is physically similar to them.

https://www.youtube.com/watch?v=P27waC05Hdk

A human consists primarily of cells - about 37 trillion of them, spanning around 200 different types.  Some types of cells make up your heart, some become your liver, and some are part of your blood.  As these cells develop and begin working together as a whole, they form tissue, and tissues do the same to create organs.  And when organs ‘cooperate’, they make you, the human.

We can currently recognize four types of tissue: nervous, connective, epithelial, and muscular.

What this says is that there are some types of structures made of cells that have different properties.  For instance, ‘nervous’ tissue is what makes you ‘alive’, allowing you to respond and react.  If an insect lands on your arm, it will stimulate hair follicles that, in turn, stimulate nerve cell receptors in your arm (part of a complex system throughout your body).  That stimulation may be quickly transmitted to your brain (which is also a ‘nervous’ tissue type) and, based on a complex soup of past experiences (associative memory), you might ignore it, scream, observe its behavior or structure for a while, or whatever else you might do based on your upbringing.  But this also depends on how your biology works.  The stimuli from the insect on your skin may not be transmitted to your brain at all and, instead, go directly to the arm muscle beneath the insect, contracting it in a process that we call ‘unconscious’ (no brain thoughts involved).  An interesting thing about nerve cells is that each of them does a simple thing: they can only transmit one signal at a time, and only with the same strength and speed they received it from the nerve cell that passed it to them, making them basically a ‘simple repeater’.  So, imagine it as a “beep” sound being passed from nerve cell to nerve cell along a string of these cells, where their only task is to fully preserve the intensity and length of the “beep” (not change it to “beeeeeep”).

What is most relevant here is the ‘frequency’ of the signal transmitted.  A slow “beep ------- beep ------- beep” moving through the nerve cell string may indicate part of your body’s interpretation of a slight sensation (perhaps a light breeze on your face), while a more frequent “beep - beep - beep” signal may warn you of a more severe pain (maybe a hammer hitting your thumb instead of the nail).

This is similar to how a computer works, where each switch (transistor) can only manage a simple on-off function, but through the use of multiple switches throughout the entire system, it can render a photo, play a video, reproduce your voice, etc..  In a similar way, a nerve cell can contribute to creating a memory, initiating a sneeze, a laugh, a sensation of coldness or a simple twitch movement, all because of the complex system it is part of.  Because of their huge numbers and interconnections, simple core elements (cells, transistors) are able to work together to render complex actions.
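
Here is a toy sketch of that ‘beep’ idea, sometimes called rate coding: a stronger stimulus produces more identical spikes per second, not louder ones, and each cell simply relays the train onward.  The numbers are illustrative, not physiological.

```python
def spike_train(intensity, duration_ms=1000, max_rate_hz=100):
    """Toy 'rate coding' sketch: a stronger stimulus produces more
    identically-sized 'beeps' (spikes) per second, not louder ones.
    Numbers are illustrative, not physiological."""
    rate = max(0.0, min(1.0, intensity)) * max_rate_hz
    n_spikes = int(rate * duration_ms / 1000)
    if n_spikes == 0:
        return []
    spacing = duration_ms / n_spikes
    return [round(i * spacing) for i in range(n_spikes)]   # spike times in ms

def relay(spikes):
    """A nerve cell as a 'simple repeater': pass the train on unchanged."""
    return list(spikes)

breeze = spike_train(0.1)       # light touch  -> sparse beeps
hammer = spike_train(0.9)       # painful hit  -> rapid beeps
print(len(relay(breeze)), "spikes/s for a breeze,",
      len(relay(hammer)), "spikes/s for a hammer blow")
```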

‘Muscle’ tissue is what allows your arm to contract, your heart to pump blood, your ‘face’ to talk or to smile.  If nervous tissue is what allows for responses and reactions, muscular tissue is what makes movement possible.  When a computer physically opens an old-fashioned DVD-ROM compartment, it has to rely on multiple moving parts, including springs and elastic bands, gears and pulleys, and several other plastic parts.  These parts make up the ‘musculature’ of a computer: you click a mouse button, which activates part of the software, which then sends a signal to another part of the physical computer to perform an action.  This is very similar to how your body reacts via the nerve cells that then activate certain muscles, if the right buttons are ‘pushed’.

Tissues that form into organs need to be separated and encapsulated somehow so they don’t ‘break apart’, ‘combine’ with other organs, or become easily damaged.  So, a bunch of another type of cells, called ‘epithelial’, gives them structure and shape.  They coat many of your organs, and even the exterior of your body.  With support from some other cell types, the skin is mostly made up of this kind of tissue.  It’s this protective tissue that keeps organs somewhat separated.  For example, you don’t digest yourself because your stomach is coated with these types of cells.  Since they coat most of what makes up your body, your interaction with the universe is basically managed through these types of cells.  Similarly, most parts of a computer are also isolated with all kinds of insulators, or else the energy would flow in ways that would ‘fry’ at least some of the computer’s parts.  These insulators are what allow different parts of a system to manage their own individual job and, ultimately, perform as a complete system.

The last type is the ‘connective’ tissue which, well, connects all of the above together.  Your body machine needs a skeleton (frame) and something to connect all of its parts.  Connective tissue does that.  Stretch your skin, flex your nose, or feel the bones in your fingers and you will gain a sense of what this type of tissue does.  Without this connective tissue, you would be more like a pile of goo.  Actually, when you cook meat, you do that partly to break down the connective tissue that holds it together, so you can chew it.  Likewise, without its structural components, from screws to metal bars, no computer is a computer.

Visit this website to see all of the types of tissues explained (with real ‘photos’) - link

The human body is, well…, seriously complex, and made up of so many different kinds of cells that interact in so many ways.  Each of the four types of tissue described above has even more subcategories to learn about, so this is only a very small representation of what the human body does or what it is.  Check out this video illustrating just a tiny fraction of the microscopic events that happen inside everyone’s body all the time - https://www.youtube.com/watch?v=YdjERhTczAs

Here are some real photos of the human body under the microscope:

http://creative.sulekha.com/amazing-pictures-of-human-body-cells-repost-after-downloading-pics_423039_blog

http://www.buzzfeed.com/natashaumer/this-is-what-the-human-body-really-looks-like-under-a-micros#.ra0JxXJ9A

http://www.popsci.com/new-book-looks-human-body-under-microscope?image=3

http://discovermagazine.com/galleries/2015/jan-feb/science-beautiful

The Crash Course on Anatomy is also highly recommended to better understand all these processes.  http://videoneat.com/lectures/3970/anatomy-physiology

Finally, we suggest these documentaries in order to gain a much stronger understanding of the massive ‘universe’ inside you, which may be even more complex than any ‘outside’ world that we might eventually find on distant planets - http://videoneat.com/biology/

Let’s now look at some ‘useful’ stuff that we can do with these varied cells that make us, us.

Cells make up the foundation of humans: how we breathe, breed, move, eat, hear, taste, and even think.  If you were to lose one leg, suddenly :o, not only would you not be able to walk anymore, but you would also leak essential fluids that would quickly end ‘you’.  A sneeze is nothing more than some tiny ‘things’ inside of you reacting to various stimuli.  When you close your eyelids because there is too much light, that is another mechanical reaction, triggered by the effects of the Sun’s rays.

So, how can we tweak our mechanical body to make it healthier?

I used to repair computers a few years ago and I enjoyed taking parts from different computers and assembling them in many different configurations.  But I quickly learned that some of these parts were not compatible with others.  Some CPUs (processors) do not ‘fit’ on some motherboards (the foundation of a computer system); even if they fit physically, some still cannot work with the rest of the system because of other incompatibilities (frequency, overheating due to a lack of a good heat dispersal system, and so on); even more interestingly, even if they fit and the system seems to work, you may later realize that some parts do not work properly due to other, less obvious incompatibilities that you were not initially able to detect.  I once had a RAM module (computer stuff) that was performing at half of its rated capacity because it could not communicate with the rest of the system very well.

You may be able to take a hand from a dead body and try to connect it to a living human, but if the tiny bits that connect all the tissues between the hand and the body do not match properly, you cannot attach it.  Even if you can attach it, the hand may not function properly.  A human is far more complex than a desktop computer, so the potential for incompatibility is much higher.  This is why organ transplants are not that efficient and pose many dangers.  But unlike a computer that cannot be that much improved or repaired without swapping out parts, a human body can be tweaked.

There are currently 3 promising methods/techniques toward achieving that:

  1. Creating building blocks (Transforming one cell into another and creating new custom cells)
  2. Manipulating building blocks (Manipulate cells’ DNA)
  3. Creating human parts from these building blocks (Assembling cells and manipulating their growth)
  1. Creating building blocks

Transforming one cell into another and creating new custom cells

This is the key, as it provides us with the many lego-like building blocks that we need.  Although we have many different types of cells in our bodies, each with different structures and performing different functions (liver cells, muscle cells, neurons, etc.), they all have the same DNA, which is uniquely yours.  Remember: the entirety of you is made from just two cells (a sperm and an egg cell).  So then, how is it that you now have so many varied types of cells?  What differentiates a lung cell from a liver cell is how the DNA’s genes (parts of the DNA) are expressed (turned on or off).  While the same code is inside all cells, different parts of it activate in different ways to give different cells their unique shapes, and thus unique functionalities.  That’s really all it is.

There are some types of cells found in ‘undeveloped’ humans (after a sperm and egg combine, they form a few embryonic ‘stem cells’) that you can extract, add to a heart muscle, for example, and they will become heart cells.  These are ‘undefined’ cells that can become any kind of cell.  That’s how ‘magical’ it is.  What they become is regulated by what signals (chemicals) the cells eventually interact with (the environment).  So, if you put them in a liver, they become liver cells; in a lung, they become lung cells; and so on.  In this way, they can transform into any kind of cell, based on the ‘environment’ within which they find themselves placed.  It is similar to how a Christian, a thief, a programmer, a football player or a scientist are each created by the environments that they are exposed to.  They start out (as babies) basically the same, and then differentiate based on what their total environment causes them to become.
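
As a toy picture of this, imagine every cell carrying the same little ‘genome’ while the surrounding tissue decides which genes get switched on.  The gene names and signals below are entirely invented, just to make the idea concrete.

```python
# Toy sketch: every cell carries the same 'genome'; the environment
# decides which genes are switched on, and that is what makes a liver
# cell different from a lung cell. (Gene names and signals are invented.)
GENOME = {"make_surfactant": "lung stuff",
          "detoxify_blood": "liver stuff",
          "contract_rhythmically": "heart stuff"}

ENVIRONMENT_SIGNALS = {
    "lung":  {"make_surfactant"},
    "liver": {"detoxify_blood"},
    "heart": {"contract_rhythmically"},
}

def differentiate(stem_cell_genome, environment):
    """Return the 'expressed' genes for a stem cell placed in a given tissue."""
    active = ENVIRONMENT_SIGNALS[environment]
    return {gene: product for gene, product in stem_cell_genome.items()
            if gene in active}

print(differentiate(GENOME, "liver"))   # same genome, liver-specific output
print(differentiate(GENOME, "heart"))   # same genome, heart-specific output
```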

Collecting stem cells from undeveloped ‘human creatures’ (embryos) is a bit tricky due to availability, and the fact that you ‘destroy’ a potential human in the process.  However, you can also find them inside your own body.  Your skin completely replaces itself every 30 days or so, made possible by stem cells inside you that transform into skin cells.  The same goes for your intestinal lining, liver, and other organs/tissues.  These stem cells are less potent than the embryonic type (cannot transform into as many cell types), but they are still a fantastic tool that people are already working with.  Keep these stem cells in mind, as they are perhaps the most important building blocks available to us because of their ability to ‘morph’ into any kind of cell.

Within your bone marrow, there is a type of stem cell that can produce/transform into red or white blood cells.  Mutations of these stem cells can occur and, if they give rise to many ‘mutant’ white cells, we call them ‘cancerous’.  This is because your body becomes unable to produce enough of the ‘good’ cells and you end up with more ‘mutant’ (non-functional) cells than normal, healthy ones.

We can kill these cancerous cells with chemotherapy (substances), but that also kills some of the good stem cells that were producing the ‘good stuff’ for you, making this approach highly problematic.  However, we are now able to inject new stem cells into the bone marrow following chemotherapy treatments, and they start producing ‘good’ cells for your body.  This is similar to spraying pesticide on an insect-devastated crop of vegetables.  It kills the insects, and the vegetables, but you can then plant new seeds to produce vegetables again.  So, a stem cell is a kind of seed that can become any kind of cell, depending on where it’s ‘planted’.  Imagine having the same seed producing any kind of plant, the way that stem cells become any kind of cell.  That would be awesome and we just might someday be able to invent such a thing.

But here is another fantastic discovery: as I mentioned earlier, the cells inside your body share the same DNA, but expressed in different ways to create different types of cells.  What if you could tweak the DNA of a muscle cell, for example, and transform it back into a stem cell, so it can then be transformed again into another kind of cell?  Well, it turns out that you can, and these ‘reverted’ stem cells are almost as potent as the embryonic ones.(source)  The best part?  As in the previous example of transplanting a hand, your body may reject stem cells from other sources (like the potent embryonic ones), but transforming your own cells into stem cells solves that problem.  As a result, scarcity of stem cells, along with the potential for rejection, is quickly becoming a thing of the past.

I also recommend this short Khan Academy video explaining more about stem cells, as this is such an important field of research that everyone should be aware of:

  2. Manipulating building blocks

Manipulating cell DNA

Maybe you’re thinking that’s the best trick that humans can manage.  It’s not!

A virus is a ‘biological’ nanobot that can infiltrate a cell and then replicate, thereby destroying the cell or editing its code (the DNA), which can transform the cell into a cancerous one.(source)  On the other hand, you can isolate and edit such a virus so that when it infiltrates a cell, it edits/modifies the cell’s DNA in an intentional way, for instance, to fix errors in that DNA or to create a specific new type of cell (give it new properties/functionality).

Here’s what they can do today: they take stem cells that make red blood cells from your bone marrow and put them in a ‘bag’ (container) - billions of them.  They then use an ‘edited’ virus to modify these stem cells so that they become another kind of cell that can target specific diseases.  Then they inject these modified stem cells back into your bone marrow, allowing your body to begin producing these ‘mutant protector’ cells.  In other words, your body becomes a medicine factory.  They did this for a very rare disease, where ‘fatty acids’ build up in one’s brain, killing the person before they reach the age of 10.  Once the body started to add these ‘mutant’ cells to the child’s bloodstream, they circulated through his brain, connected with these dangerous ‘fatty acids’, and reduced them, saving the child’s life.

It’s quite amazing that we can edit our own cells and insert them back into our bodies so that the body becomes its own medicine-producing factory.  I highly recommend this TED talk to learn more about this -

https://www.youtube.com/watch?v=Ez560GnkSrE

Scheme explanation:

  1. Extract blood stem cells
  2. Virus with payload
  3. Viruses are put in a bag with billions of blood stem cells
  4. New stem cells created
  5. New stem cells inserted into the bone marrow
  6. Mutated stem cells create new kind of cells
  3. Creating human parts from these building blocks

Assembling cells and manipulating their growth

Now that you understand how important these techniques are, as they allow us to create new building blocks that can transform into basically any kind of cell we need, we can move on and look at methods of putting these cells together in more complex ways, controlling their assembly, and doing some really useful things with them :).

One method is to basically 3D-print these cells in any kind of shape you may want.  This is already happening and, as an example, liver cells have been printed in this fashion.  They survive for many days in a special environment where they are fully functional.  So, imagine taking a sample of cells from one of your own organs, growing a bunch of them and then ‘printing’ them into small samples (any shape you want), so that various drugs can be tested on them.  This is huge!  Why?  Animals have traditionally been used for drug testing over many hundreds of years, which looks extremely primitive compared to what will become ‘the norm’ in a few years’ time.  Simply put, a mouse is not you.  Not even another human is you.  You are unique, so you require unique, personalized medicine.  So, by taking a wide variety of primary cells from your body (liver, lung, heart, etc.) and printing them into three-dimensional samples (replicating the real living environment they developed in), you can now test all kinds of treatments, over extended periods of time, on what is effectively your own body, quickly arriving at the best known treatment for your uniqueness, all while saving many other creatures from being subjected to all manner of testing, or even death.
https://www.youtube.com/watch?v=s3CiJ26YS_U

This also describes the main use of 3D-printed tissue (cells) today, as it is not yet feasible to print large chunks of tissue (such as a full-size liver or heart).  The issue is that cells require oxygen and nutrition, usually delivered by a native blood vessel system, but such a support system has not yet been integrated into this printing method.  I suspect it won’t be long before this becomes reality, because another method of assembling tissue is to grow it alongside structures that can be printed from materials other than cells.  For instance, you can 3D map a real blood vessel system, print it with polymers or other materials, and then grow cells around and inside it.  Damaged skin can be ‘fixed’, for example, with materials that can be printed and applied to one’s arm, making it possible for the person to regrow his/her own skin over them (the material stimulates cell growth) from his/her own cells.

Since the structure to be printed can take on, perhaps, near-infinite forms, then imagine the potential for growing cells along the ‘lines’ of any kind of structural form.  You might print a 3D scaffold of the heart and then grow heart tissues inside it.  Then again, you can already take a heart and wash it with detergent (really) in order to remove all of the cells and other ‘stuff’ inside, leaving you with a complete, already built scaffold (no need to 3D print), into which you can inject heart cells from anyone’s body and let them ‘do their thing’.  Voila, the end result is a new fully-functional heart.  You can even do this with a pig’s heart and it can become ‘yours’.  This field is still in its infancy, but is hugely promising.

https://www.youtube.com/watch?v=pd3TFB0wOI0

This one-hour talk on the topic of replacing/enhancing one’s biology goes into more details about all the technologies presented so far - https://www.youtube.com/watch?v=cULURpGU6y4

Tweaking these building blocks of ours, the cells, and then integrating them into our body, printing them into samples to better test and understand future treatments, and even creating scaffolds that can be populated with them to function as new replacement organs, is a huge advancement for human societies, because one of the most ‘problematic’ challenges over all of human history has been managing human health.  Simply put, people often ‘break’, so they need frequent repairs.  These new technologies consume far fewer resources and far less energy, and are far more advanced and focused than anything we’ve had before.  Plus, they hold the promise of ‘fixing’ almost anything that may go wrong with the human body.  Instead of looking for an organ donor for a transplant, we can just repair the broken organ, or make or grow a direct replacement from your own cells.

Because it is such a complex part of the human body, you can't grow a fully-functional limb (or even a 10% functional one).  Then again, there may not be much relevance to that approach when you can grow the individual parts of a limb (or whatever body part).  So, if there are issues with certain types of cells inside an arm (muscle, blood vessels, bone, etc.), you can target-fix those.  But then consider that there are some animals that automatically regrow lost limbs or other complex parts of their bodies.  This is an even newer field of research to better understand how it might be applied to humans.  Perhaps one day we will even be able to do that.(source)

With the techniques that we have presented so far, humans have been able to build and manipulate a wide variety of human ‘parts’ (some not fully functional yet): bone tissue, liver tissue, heart cells, multilayered skin, kidneys, hearts, tracheas, ears, noses, vaginas, muscle tissue, thymus, lungs, tiny ‘brains’, bladders, blood vessels, tiny stomachs, and so much more.

Having an identical biological ‘avatar’ of you, in the form of tissues and organs, can provide us with huge advantages for drug testing and treatment, as well as for body parts replacements and enhancements.

The main purpose of this part is to help you better understand the basics of this emerging medical approach.  It will become more and more visible over the next few years, and is such an important part of how medical care will be (or at least should be) done.  Every time you hear about 3D-printed or lab-grown organs, think about the fact that they are basically manipulating cells into tissues and, with the help of 3D-printed biodegradable structures or washed scaffolds, they then mold these cells into organs.  That is the basis of all of this: cells and how to orchestrate them.  I hope you can now better weigh these abundant news titles about human organs and have a better understanding of what we are capable of today, along with what we will likely be able to do in the near future.

There are also numerous recently ramped-up research efforts to stop or even reverse aging, since as we age, the cells that form our bodies become less able to manage their functions.  More and more scientists are emerging who see the various effects of aging on human health as ‘diseases’ that perhaps can be ‘cured’.  There is a lot of noise about ‘breakthroughs’ in this area of research these days, but perhaps it is too soon to draw more than speculative conclusions.  We may try to develop a separate article on this, to highlight the many ways that it would benefit our health to no longer ‘age’, as not all of them are ‘obvious’.

I also hope that you now recognize how ‘mechanical’ we really are.  We are made of tiny creatures that we call ‘cells’, which are all part of a massive ‘universe’ inside us, although we are that ‘universe’, as “it” is us.  Looking at it in this way, we are better able to understand how to improve it.

Perhaps tweaking our body-machine in this way will make today’s more typical mechanical and electronic devices (pacemakers, dialysis, crutches, etc.) look like primitive solutions.  Nevertheless, combining both approaches will definitely end the scarcity of organ transplants, as it eventually makes the transplant method obsolete.  

But then consider that if this happens in today’s profit-motivated world, it will take many dead bodies until these technologies become ‘important’ enough for our money-society to provide for all (and then at a price).

A major goal of the society we try to promote through the TROM project is the creation of an abundance of goods and services to nurture a saner society.  As such, the overall scarcity of medical treatments and replacement organs is one of the main, if not ‘the main’, scarcities that we need to fix.

So far we have shown how machines are more capable than humans at many, many levels (from driving, to decision making, and in many instances, even at making sense of human language), but we have also shown how ‘machines’ are part of who we already are (we use cars and smartphones, many people have been provided mechanical body parts, and so on).  More importantly, we have showcased how this perceived separation between machines and humans is highly erroneous, as we are also machines (made up of tiny structures that work together similarly to how a computer/machine works).

Could any of this so-called ‘advanced’ stuff (robots, nanobots, mechanical spleens, digital ‘eyes’, telescopes, phones, printed organs, artificial DNA alterations, etc.) become harmful to us in a way that we cannot ‘fix’ or otherwise control it anymore?  Could we humans arrive at our end as a species (extinction) because we do not understand how to play with these ‘toys’?

This last part of the book will be huge, as we cannot explain it thoroughly enough without touching upon many different topics and giving plenty of examples.  It focuses on Artificial Intelligence (which many seem to fear today), on what ‘random’ really means, and on whether there is such a thing as ‘free will’.  These are rather difficult topics that are highly interconnected and extremely important for the future we head into, so it may take reading this article 2-3 times to fully absorb it all.  Have I scared you already?  If so, I’ll also tell you that it will be a fun read because I have some ‘mental’ games (challenges) for you to play with, we will try to build a robo-chicken to finally understand why it crossed the road, we will look at how today’s news and movies have a completely wrong understanding of A.I., why the concept of “free will” may be complete nonsense, and so much more.

Buckle-up and let’s get started!

A week or so ago, a friend and I buckled up for a car ride to visit a place we’d never visited before; somewhere on a mountain, yet close to the sea.  We found the place on Google Maps and set up the phone’s navigation system to guide us there.  On the map, it looked like a lighthouse at the edge of the mountain, and Google Maps showed us that we could reach that place by car.  There were no warnings of any ‘dangerous’ roads, or anything like that.

So what happened?  Google created very dangerous directions for us to follow.  First, the road was so narrow that you could barely fit one car, even though it was marked as a two-way road.  Add to the mix many ‘sharp’ curves, trees all around providing very low visibility, and the fact that all of this was on the side of a mountain, and you have a perfect recipe for either falling off of the mountain or experiencing a head-on collision, and with no option for turning back.  The situation turned worse as the road was playing host to more and more ‘tiny rocks’ and becoming more and more inclined as we progressed.  The car’s tires and brakes did not manage that combination well at all, and we started slipping down a sloping ravine, even with the brakes to the floor.  In that direction, the road ended at the edge of a steep cliff, but we had no choice other than to ‘ride it out’ until the car, thankfully, stopped near the edge of the cliff.  We tried to drive back up, as there was no other way out, but the car couldn’t climb more than half of that slope before we started to slip down again, backwards, towards the cliff's edge.  Second try - full speed, lots of rocks shooting out from beneath the wheels, along with lots of dust and smoke.  It was like drifting near the edge of a mountain, where any sideway move, because of the slippery road, could throw you off the ledge and into the sea below.  Thankfully, we did manage to climb it and return back home, but it wasn’t a pretty experience.  The car smelled like melted plastic, with dust covering the interior, and our hearts pumping far too fast.

Did Google try to kill us?  Whose fault was it?  What if we slipped off of that road into the sea and died?  Should Google be responsible for it?  What if we were using a self-driving car and that car drove us directly into the sea, based on Google’s directions?

We’ll get back to that at the end.

When people say that they fear technology, or specifically ‘robots’, what are they really talking about?  Technology is a knife, a gun, a washing machine, a piece of furniture or a laptop.  I can kill with any of those by shooting or cutting someone, putting them in a washing machine or hitting them with the laptop :).  Although rare, a laptop could ‘explode’ due to extreme battery overheating, without anyone ‘animating’ it.  A knife (or even a stick) can become a lethal weapon if you slip and fall onto it.  Even a piece of furniture can become deadly if you fall and hit your head on it hard enough.  Almost all such technologies, tools, have the potential to be harmful: whether animated by humans, or just ‘reacting’ to different environmental factors (like the laptop exploding), or for someone to simply be in the ‘wrong’ place at the ‘wrong’ time.

It seems to me that there are only two reasons to ever ‘fear’ machines/tools:

  • Intentionality and Money
  • Unpredictability

1. Intentionality and Money

No matter how ‘stupid’ a machine is, humans can make it dangerous.  It is relatively easy today to attach a smartphone to a machine gun and program it to kill only people who smile.  Having a smartphone camera detect a ‘smiling’ face (something it’s already doing while you’re taking photos) and then follow your added programming to activate the trigger of a machine gun is not that complex.  No matter what technology/tool we talk about, humans can ‘intentionally’ make it dangerous towards others or the environment.

To ‘control’ the outcome of any tools we use, time, resources and energy must be dedicated to the development of safer tools and smarter approaches in dealing with them.  But today’s monetary race mentality impairs people from focusing enough on the tools they supervise.  If there’s not enough money allocated to monitor and very thoroughly test new technologies, then those technologies will be released to the world without proper testing, all because someone wants to make a business out of them before someone else does.  Money is also the primary factor behind the creation of many useless or dangerous tools: guns, drones used for killing people, spying A.I.’s, and so much more.  So, money is not only a limitation on the testing and managing of new technologies; it’s also a core driver for intentionally creating things that are very harmful.

2. Unpredictability

No matter how educated people are, or how well-designed the environment is to not produce the abhorrent behavior that ‘animates’ tools to become dangerous, a piece of technology can still be dangerous when you cannot predict its ‘behavior’, its outcome.

Let’s first consider something simple: FIRE.

Fire may not seem like much of a tool to most of us now.  But while many may see illumination, furnaces, and stoves as tools today, they are just modern versions of what fire was used for not long ago.  When humans started to ‘tame’ fire, and then learned how to ‘make’ fire out of wooden sticks or via other methods, their lives and societies changed forever.  It became possible to cook, get warm, see in the dark, and even protect themselves from other animals.  That happened hundreds of thousands of years ago, so it’s interesting to note that fire was still at the center of ‘modern’ societies as recently as 100 or so years ago.  From purifying water (boiling) to illuminating streets, producing heat and transforming its energy into mechanical power, it was a ‘must-have’ tool.  But it’s also true that fire has been responsible for innumerable villages being burnt to the ground, bringing about the death of millions, and leaving many more injured and frightened.

150 or so years ago, humans started to understand tiny ‘electrons’ and how to use their movement to create energy for human use, ushering in light bulbs in place of fire-lamps, electric heaters instead of fireplaces, and so on.  Today, we have power plugs, powerwalls, computers, refrigerators, microwaves, A/Cs and so many other electricity-fueled utility devices within the modern home.  Someone who lived 150 years ago would be terrified to learn that our ‘modern’ homes, especially those made entirely of wood, would be ‘laced’ with ‘invisible energy’ that could possibly start a fire inside the house.  Trying to explain to those people how we ‘tame’ this power (this time ‘electrons’, rather than fire) for so many uses would cause them to think you’re crazy, as to them there would appear to be far too many risks involved with such ‘magic’.  While we still have the issue of houses, building complexes, or even entire villages or small cities that occasionally burn down because of these installations, and people can still suffer or die from that (as well as from electrocution), the overall impact of ‘taming’ electrons is much safer and insanely advantageous compared to the direct use of fire for these needs.

Playing with fire and electrons is a very dangerous business, but safety measures ‘evolve’ alongside these technologies to make them safer and safer.  For example, you could insert two metal nails into a European power socket with your bare hands, and I can guarantee that it won’t feel good.  You are probably not that ‘stupid’ to try it (you have learned and understand the consequences).  Then again, you might not yet be aware that many newer power plugs have safety features built into them that do not allow people (or children) to do such harmful things to themselves.  For instance, you can design these power sockets to communicate with electronics using just a small amount of energy when you first plug in a smart device, so that they have to recognize the electronic device before providing the full power needed.  Therefore, the power socket ‘knows’ that you just plugged in a ‘proper’ device, instead of two metal nails to test your body’s capacity to absorb a jolt of lightning :), and only then will it allow the full power to ‘leak’ out into the device that you plugged in.

But many would argue that the fear of ‘machines’ and technology is mostly about the A.I. stuff - Artificial Intelligence - and not the more ‘predictable’ technologies.

These days you hear all kinds of stories about how A.I. can understand human language, can understand human emotions, run scientific experiments, or even dream.  It sounds a lot like anthropomorphism, and that may very well be the case.

So let’s focus on a much clearer understanding of what Artificial Intelligence really is and how it works.  If there is a single pairing of two words that you are sure to hear more and more these days, it’s “Artificial Intelligence”, and without understanding how ‘it’ works, you may both gain and project a completely distorted view of this emergent technological field.

When I search on Google for “TROM” (a documentary I made a few years ago), Google tries to correct me by asking “Did you mean ‘TRON’?”.  It does that because TRON is a popular movie.  When I search “stars this week” to learn what astronomical stars I might be able to see more brightly with my telescope this week (depending on my location, weather, etc.), I get suggestions for “Dancing with the stars”, a TV show that I have no interest in, or news about Hollywood ‘stars’ that happen to be most popular this week.  Google doesn’t do this to change my views of the world, dumb me down, or annoy me, but more simply because Google’s A.I. is nothing more than a very long string of code (instructions) as to how it should display results.  It may take into consideration what is more popular in the area I am connecting from, combine that with the hour and date of my search, or what was searched for by other people from my area of the world that day.  It will also take into account which websites are more popular and display results from those websites first.  All of this is part of an algorithm that makes Google ‘behave’ in a particular way for each individual search.

Google A.I. is not what Hollywood depicts in movies.  It is powerful, but only for a very specific task.  ‘IT’ does not wonder about stars in the night sky, nor is it at all curious about my documentary.  ‘IT’ just is, and ‘IT’ has some specific functions that people created and continue to refine for ‘IT’.

If, for instance, the Google search engine is programmed to take into account the previous search when you search for something new, then the results will become very different from what they might be without that ‘feature’.  And you can already see this.  If you ask Google “Who is Charles Darwin?”, Google will answer that.  But then, if you ask “When did he die?”, Google automatically recognizes that you are still asking about Darwin and answers “April 19, 1882”.  It gets even more interesting when you then ask “Who was his wife?”, as Google keeps the conversation going by telling you Darwin’s wife’s name.  You can continue to ask “When did she die?”, and you will again get a relevant answer based on the chain of previous questions and answers.  By this time, you may be sensing some ‘smarter’ A.I. with whom you want to have a conversation, but remember, it’s very limited to the rules that this software follows.  Google Now is not really ‘wondering’ what your interest in Darwin might be, or whether you are an ‘atheist’ (since you searched for Darwin).  Even if you program IT to ‘create’ such associations, IT still cannot ‘think’ like a human.  IT can only make those associations and display some results based upon them.

Super complex A.I. comes about when the series of instructions that it follows are many, widely varied, and dynamic (meaning that the software can add new rules and overwrite the old ones).  This is when an A.I. may become unpredictable, and potentially dangerous.  To understand this aspect, we need to first talk about the idea of ‘random’ to help better understand and weigh what ‘unpredictable’ really means.  This will take up a big chunk of the article, but bear with me because understanding this concept will make you say “Aha, I get it!” at the end of the book (at least I hope so).

Random

If I were to challenge you to find a random process in nature or invent a software that creates random numbers, do you think you would be able to complete that challenge?  Let’s see.

93 ??  dada ?|/\ so what?!  “ - This seems ‘random’, right?  Well, what if it isn’t?  The lottery, weather, ideas in your head - they all appear to be random.  But are they?

“Random” is a term used to describe something that seems to use no rules to arrive at a result; an outcome with no way of understanding how that outcome was arrived at.  You may have come across numerous ‘random’ websites where you can generate ‘random’ numbers (random number generators).  They claim to generate ‘random’ numbers using computer software.  How true is that statement?  How in the world can you program something to give you a result that is fully unpredictable, or otherwise impossible to understand how it was arrived at, when you are the one who programs it to create those numbers?

Here are some numbers: 145 → 42 → 20 → 4 → 16 → 37 → 58

Are they random or not?  How can you tell?

If I were to tell you that they are not ‘random’, and that all of them are related to each other by a single ‘rule’, would you be able to figure out the rule that unites them all?

145 → 42 → 20 → 4 → 16 → 37 → 58 → X

The only way to ‘predict’ X is to understand the rule governing the string (the rule of the game).

They are all connected in the following way: the rule is applied to ‘145’ to arrive at ‘42’.  The same rule is then applied to ‘42’ to create ‘20’, and so on.  A single rule is applied to each number in order to provide you with the next one, and the string loops around like a circle.  So, if you figure out the rule being used and then apply it to ‘58’, you will find the “X” number.  And to test that you are correct, the same rule must bring you back to 145 when it is applied to “X”.  Get it?

Give it a try and see if you can figure out the rule.  Take all the time you want.  Let’s see how ‘smart’ you are, because if you crack this problem, you are a bit like an A.I..  🙂

You can check the answer here.

The rule is to add together the squares of each digit within the current result to arrive at the next number in the series.

145 → 1² + 4² + 5² = 42

42 → 4² + 2² = 20

20 → 2² + 0² = 4

4 → 4² = 16

16 → 1² + 6² = 37

37 → 3² + 7² = 58

58 → 5² + 8² = 89 (the ‘unknown’ number)

89 → 8² + 9² = 145

You see, once you know the rule behind the puzzle, it becomes so easy - but it’s very hard for a human brain to figure out what the rule is.
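If you’d like to play with this rule yourself, here is a minimal sketch (in Python, purely for illustration) of the ‘square each digit and add the squares together’ rule described above:

```python
# A minimal sketch of the rule above: square each digit of the current
# number and add the squares together to get the next number in the string.

def next_number(n):
    return sum(int(digit) ** 2 for digit in str(n))

n = 145
chain = [n]
for _ in range(8):
    n = next_number(n)
    chain.append(n)

print(chain)  # [145, 42, 20, 4, 16, 37, 58, 89, 145] - the string loops back to 145
```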

This is the basic idea behind generating ‘random’ numbers: sequentially apply a rule to a number that will create a string of numbers so different from each other that they seem ‘random’ to you and me.  But of course, with enough computational power and time, testing millions or billions of formulas very quickly, you can eventually crack any rule behind any such string of numbers, no matter how complex it may be, which highlights the fact that all such seemingly ‘random’ strings of numbers are not ‘random’ at all.  The only difficulty lies in cracking the rule that creates them.  Depending on a rule’s complexity, it can take present-day computers hundreds, if not millions, of years to crack complex ‘riddles’, so no worries if you were unable to figure out the rule behind the string of numbers above, even if this was a very simple one compared to how complex these rules can be made.

The puzzle above is intended to help you understand how ‘tricky’ such relationships can be, making it so hard for us to understand what rules are being applied to those relationships, even when the rules themselves are not that complicated.  Now, to more thoroughly understand this idea, let’s make up a rule that will give us ‘random’ numbers from a computer.

To do this, we’ll first need a ‘seed’ (an initial starting number).  We’ll then multiply it by itself, keep only the digits at the center of the resulting number, and repeat the process on those retained digits.  That will serve as the rule we just invented.  It’s relatively simple, but it will give rise to more complex and ‘unpredictable’ numbers than the previous rule we played with.  The formula looks something like this: SEED*SEED = a new number.  The next ‘seed’ will be the digits from the middle of that newly generated number.  And then we repeat the entire thing.

So, let’s start with 46 as our initial seed.

46*46 = 2116

The new seed is “11”.

Let’s go on and generate a few more such numbers:

11*11 = 121

2*2 = 4

4*4 = 16

16*16 = 256

5*5 = 25

25*25 = 625

2*2 = 4

4*4 = 16

16*16 = 256

…..

So, the ‘random’ numbers generated are 11, 2, 4, 16, 5, 25, 2, 4, 16, 5….

As you can see, for those who do not know the rules, there seems to be no obvious relationship between them and their order, but the results also start ‘repeating’ (patterns), which makes it easier to ‘crack’ the rule used to generate them.  To make them not repeat so often, you would need to start with a bigger seed, and perhaps a more complex rule set.  This example illustrates how computers work to generate the ‘ugly’ and ‘unpredictable’ numbers that we call ‘random’.  But remember, if any patterns repeat, that is called a ‘bias’, a hint about the rule, and it makes these numbers more easily predictable.
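Here is a minimal sketch of that rule in code.  The exact way of picking the ‘middle’ digits is an assumption on my part (drop the first and last digit whenever the square has more than two digits), chosen so that it reproduces the numbers above:

```python
# A toy 'random' number generator based on the rule above: square the seed,
# then keep only the middle digits of the result as the next seed.
# Assumption: if the square has more than two digits, drop the first and the
# last digit; otherwise keep the whole number.

def next_seed(seed):
    square = str(seed * seed)
    if len(square) > 2:
        square = square[1:-1]  # keep only the middle digits
    return int(square)

seed = 46
numbers = []
for _ in range(10):
    seed = next_seed(seed)
    numbers.append(seed)

print(numbers)  # [11, 2, 4, 16, 5, 25, 2, 4, 16, 5] - notice the repeating pattern (bias)
```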

What is more ‘random’: a real coin toss or a virtual one (simulation)?

13 - 18 - 23 - 30 - 37 - 4 - 7

Can you guess the rule behind this new string of numbers?  This one is nearly impossible (even for computers) to guess, as the way it was created was wildly different from the other ones.  How different?  It is not based on any mathematical formula.  It was generated using this kind of sound (listen to it): sound

I generated this number string with random.org (app here), and they do not use mathematical rules to generate new number strings; they use a natural phenomenon to do that.  More precisely, they use atmospheric noise (the sound you just heard).  If you tune an analog radio ‘in between’ live broadcast stations, you will hear a similar noise.  What you are hearing is actually more amazing than most people realize, because you are listening to the effects of thunderstorms across very wide distances.  This noise is primarily created by lightning discharges (around 40 per second, worldwide) that are nearly impossible to completely predict with today’s knowledge and technology.  This noise varies according to where the lightning discharges happen (distance), along with their intensity, frequency, and so on.

So, they record this noise and analyze it with software that creates a visual representation of it.  Then they basically pick up variations in this graph (little bumps), transforming them into 0’s and 1’s (binary strings).

For instance: big and wide bumps are interpreted as 0’s, while flat and wide (no bumps) areas are interpreted as a 1.  And as you may know, computers use 0’s and 1’s to create all that you see on a computer (photos, videos, text, formulas, etc.).  Finally, using the binary strings created from that noise, you can connect the thunderstorms to the computer, and they can now ‘communicate’.

So, for instance, they can then add a rule in the computer saying 0=tail and 1=heads for a coin toss game.  Now, when you go to their website to play the coin toss game and click the button to flip a virtual coin, the software looks at this atmospheric noise and if it first reads 0 in this ‘noise’ graph (big bump), then the coin will end up tails.  If it reads 1 (flat), then it will be heads.  Get it?

For example, 10010 means: Heads - Tails - Tails - Heads - Tails.
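As a tiny sketch of that mapping (assuming 1 = Heads and 0 = Tails, with the bits typed in by hand rather than read from atmospheric noise):

```python
# Turn a string of bits into coin tosses, using the mapping described above.
# Here the bits are hard-coded; random.org derives them from atmospheric noise.

bits = "10010"
tosses = ["Heads" if bit == "1" else "Tails" for bit in bits]
print(" - ".join(tosses))  # Heads - Tails - Tails - Heads - Tails
```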

This is really amazing, because it really means that your virtual coin will ‘land’ on heads or tails based on lightning strikes.  Go ahead and play with some of their games, as they are all based on this concept.

It should be mentioned that this system is still far from perfect.  They need to pick up this atmospheric noise within a special environment, since even a computer fan’s electrical properties can disturb it by introducing an additional, unintended pattern into the noise (bias), rendering the resulting noise ‘less random’.  But even after very carefully applying methods to eliminate those influences, if the software analyzes a piece of that sound from the ‘thunderstorms’ and ‘sees’ that bumps (0’s) occur more often than non-bumps (1’s), that means that the system is again somehow non-randomly ‘favoring’ one of the two.  That’s not good for such a system, because it needs to produce as close as possible to a 50-50 chance for either 0’s or 1’s to be picked up from that noise.  If something is influencing this noise in a way that causes it to ‘favor’ one of the two, it becomes more predictable.  As mentioned earlier, when patterns repeat, it is easier to crack the ‘rule’ of the game or to otherwise predict outcomes.  In this case, if the noise favors arriving at a 1, for instance, then it will be more likely for Heads to come up as a result in such a coin flip game.

To understand how patterns (‘biases’) can make even complex systems predictable, roll a die.  A real one.  What do you get?  You can only ‘land on’ a 1, 2, 3, 4, 5 or 6.  You have 1 in 6 chances to guess the number, which is pretty ‘random’.  Roll two dice and the story changes, as some numbers now have more chances to come up as a result of combinations.

If you roll two dice, it turns out that your best ‘bet’ would be on 6, 7, or 8, because you will statistically land on those more often (with 7 more often than any).  The two-dice system favors certain numbers as you roll them multiple times, making it more predictable.  So, the system of two dice has a BIAS.  If you were to simulate the roll of two dice millions of times, you would see more clearly how those 3 results come up far more often than any other numbers.  The same thing applies as you add more dice to the mix, with some numbers resulting more frequently as a result (source).
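You can see this bias for yourself with a quick simulation; here is a minimal sketch that rolls two virtual dice a million times and counts the totals:

```python
# Roll two dice many times and count how often each total comes up.
import random
from collections import Counter

counts = Counter(random.randint(1, 6) + random.randint(1, 6) for _ in range(1_000_000))
for total in sorted(counts):
    print(total, counts[total])  # 7 shows up most often, with 6 and 8 close behind
```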

Random.org is striving to eliminate all biases to get closer and closer to 50-50 chances for arriving at 0’s and 1’s.  They have not attained ‘the ultimate’ 50-50 yet, but they are very close to it.

Random.org is the most well-known of such ‘random’ numbers generators and games based on ‘randomness’ that output unpredictable results.  But even their system is based on physical phenomena that have a deterministic value.  In other words, something creates those thunderstorms and, if we gain enough information on that and pull together enough computational power, in theory, we could learn how to predict the outcomes.  There are some other systems that ‘claim’ to produce even more ‘random’ outcomes, which you can check out here, but for the ‘purpose’ of this article, we will stick with random.org’s powerful example.

So, is a real coin flip more ‘random’ than a virtual one at random.org?

Rolling a single ‘real’ die appears ‘random’, until you employ a high-speed camera and some mathematical formulas to help you predict the outcome (read about it here).  And while a real coin toss may also seem ‘random’, it can also be predicted with the same high-speed cameras and some maths (source).  Rolling a real die and flipping a real coin involve numerous biases that you can figure out to predict outcomes: the die’s edge, corner radius, material, center of gravity, launch speed, etc.… the coin’s weight, balance point, metallurgy, etc..  All of these biases can make such ‘events’ predictable through statistical probabilities (you roll a die many times to learn if it ‘favors’ some outcomes, and then work to figure out why - the same goes for a coin toss).

(see this video for more on this)

So, it may seem ‘counter-intuitive’ to opt for a virtual coin toss on random.org rather than a real one, but the random.org toss produces much more unpredictable results than the one influenced directly by humans.  Interesting, isn’t it?

In reality, it seems that nothing is truly ‘random’, because all events have a deterministic value.  Many events, such as weather, the movements of billions of atoms, or even a lottery extraction, can be based on a complex series of mechanistic events and reactions, making it appear ‘random’ to us, because we are not recognizing the exact deterministic values at work.  Many scientists say that spontaneous reactions occur in the quantum world (the world beneath the atoms), where some particles seem to just ‘pop’ into existence, but we will leave that for another article, as it requires a more detailed presentation.

All of this discussion about ‘random’ was not that ‘randomly’ chosen for this book, as this is not just about numbers; it is also very related to how a huge chunk of the internet works.  Whenever you send an email, buy something online or chat via Skype, you are doing all of this ‘securely’ because of the idea of ‘random’.

Computers manage data in ‘binary’, which is simply strings of 0’s and 1’s that represent the data stored on its hard drives, as well as whatever data is being worked on at any given moment (text files, video, apps, system commands, online communications, etc.).  If I send you an email with the text “How are you?”, what is actually ‘sent’ over the internet is 010010000110111101110111001000000110000101110010011001010010000001111001011011110111010100111111.  If someone ‘intercepts’ that binary string, that person can easily read the email that I just sent to you.  To make this communication secure, the idea is basically to create a secret formula (rule set) that ‘scrambles’ the content of my email message in a way that the result would make no sense to anyone.  This is very similar to how you create ‘random’ numbers using a mathematical formula, so that no one understands how they were created.
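As a small illustration, here is a minimal sketch of how a text message becomes that string of 0’s and 1’s (8 bits per character, standard ASCII):

```python
# Encode a text message as the string of 0's and 1's that travels over the wire.
message = "How are you?"
bits = "".join(format(ord(character), "08b") for character in message)
print(bits)  # the same long binary string shown above
```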

So, imagine this simple rule:

Replace all a, o, h, e, r, u, y and w letters with 1, 2, 3, 4, 5, 6, 7 and 8, respectively.  An email that says “How are you?” will then look like “328 154 726?”.  If we then add another rule to our code to eliminate spaces, the email will look like 328154726.  The computer then sends out a string of 0’s and 1’s that represent that scrambled message.  No matter who might intercept my message, he/she won’t be able to understand what that message actually says.  However, if the person receiving the message (you) has the rule we just described, you can unscramble the message and understand what I sent in my email.  This scrambling and unscrambling of data is the basic concept behind encryption.  Most modern encryption is secure because, as mentioned earlier, it would take hundreds of years for today’s supercomputers to crack the rulesets in order to decode the data/messages that they mask.  To understand more about this process, watch this video - https://www.youtube.com/watch?v=M7kEpw1tn50
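And here is a minimal sketch of that toy scrambling rule (this is not real encryption, just the letter-to-digit substitution from the example):

```python
# Scramble a message with the toy rule above: swap certain letters for digits
# and drop the spaces. Anyone holding the same mapping can reverse it.

mapping = {"a": "1", "o": "2", "h": "3", "e": "4", "r": "5", "u": "6", "y": "7", "w": "8"}

def scramble(text):
    swapped = "".join(mapping.get(letter, letter) for letter in text.lower())
    return swapped.replace(" ", "")  # second rule: remove the spaces

print(scramble("How are you?"))  # 328154726? (only the listed letters are swapped)
```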

CRACKING

Over time, the term hacking has come to be associated with people who intentionally breach the security of a piece of software or take advantage of discovered ‘bugs’ to cause harm.  However, that term is continuously being misused (mostly by ‘the media’), as ‘cracking’ is the proper technical term for describing that.  Hacking means “to take something apart to learn how it works”, often with the intention of positively improving on its design.  So, keep that in mind.

Speaking of encryption, let’s talk a bit about software security, because even if everything is encrypted, security is not only about people intentionally trying to crack the encryption systems; security is also about unpredictability (perhaps even more so).

You may have heard of computer viruses, but have you ever wondered what they really are?  They are basically software apps or scripts (0’s and 1’s) that can wreak havoc on a system in one way or another.  Viruses are just one type of malicious software (others include trojans, worms, spyware, rootkits, etc.), so to incorporate them all, we will use the more all-encompassing term ‘malware’ (source).

To simplify this concept: if you construct a building, you cannot possibly be fully aware of the ‘health’ of every part used for it.  For example, cracks may have formed in some parts of the cement, and if water is able to infiltrate the building there, the cement may eventually weaken to the point where the entire building (or a section of it) collapses.  Holes in the building negatively affect environmental systems (A/C, heating, humidity, etc.) and may also allow rats, insects, or whatever to get in (stealing or contaminating the food), creating significant discomforts for the people making use of the building.  So, malware is to computers like cracks, water, and rats/insects are for buildings.  Some can collapse your software/operating system (like erosion for buildings), some will add annoying popups with advertising on your browser, or steal your passwords or credit cards (like rats disturbing the ‘peace’ of the ones living in the buildings, or stealing the food).

When you construct an operating system (like Windows, macOS, Linux), you cannot be fully aware of all that you have built.  There are simply far too many inter-connections and rules that you need to include: recognizing and fully supporting so many combinations of thousands of hardware possibilities, allowing this piece of code to do that but not this, recognizing and properly loading the USB flash drive that you just inserted into your computer, recognizing that you have a video card so that your monitor works, allowing you to open and then make use of a browser - and so on.  All of these are coded rules designed into the operating system (OS).  Then consider all that’s needed for the OS to properly handle all of the other pieces of software that you choose to install within your OS (apps, programs) so that all of those can work together.  Just imagine how many rules it takes for a typical operating system to work.

So, there may be some rules that you were not able to completely develop (unaware that someone might try doing something you didn’t anticipate), rules that conflict with other rules, or missing rules.  And this happens quite often.  If some ‘rats’ (crackers) discover any improperly coded software (holes in the software), they can then infiltrate and mess with the Operating System (or any software) and may add new, undesirable rules to it.  For instance, you may have a rule that does not allow specific files and folders to be deleted, as they are critical to the proper functioning of your OS, but if there is a ‘crack’ that allows them to be deleted, imagine someone taking advantage of it and doing just that.

The most robust approach is to quickly patch newly discovered holes to keep your Operating System (OS) as trouble-free as possible.  As an "open-source" operating system, Linux is able to manage this approach via the support of a huge community of open-source programmers that continuously seek out and fix these holes.  Proprietary (closed) operating systems, such as Windows (and sometimes macOS), are created and supported by companies that mainly rely on a handful of paid employees to work on the code, so updates to their operating systems (hole patching) are issued much more slowly than with Linux's open-source approach.  As a result, closed OSes have to try to 'kill the rats after they have entered the building'.  This is why Windows (and sometimes Mac) users usually have some kind of anti-virus system installed on their computers, designed to search for particular 'rats' to kill.  Unlike Windows or macOS, the operating system code that makes up the free, open-source Linux system is open for inspection by everyone, so anyone can report or fix holes the moment they are discovered.  This ultra-fast and widespread response network is why you do not need an anti-virus system on Linux.  This doesn't make Linux perfect, of course, but it does make it extremely robust.

Even if you create a much saner environment, where no one wants to cause ‘harm’ via such holes in any software (A.I., your smartphone, a health app, etc.), the presence of software holes can still be dangerous without intention (like rain drops that gradually weaken the cement).  You can basically write an app (a piece of code) and, without being aware of any potential problems, program the app to access some parts of the Operating System that happen to have holes in them, unintentionally causing the OS to ‘misbehave’ (errors).  So imagine that you construct a complex software that carefully monitors one’s health and, as needed, injects insulin into the body, but because the software (yours, the OS, another running app on the system, or any combination) has holes in it, the commands issued by your app may not be carried out properly, unintentionally endangering someone’s life.

This is why unpredictability can become dangerous.  Linux provides a good counter-example of how, when many eyes are on the code and many hands are on the keyboard (without monetary pressure), you can more quickly and accurately fix these holes to make the software safer.  If it weren't for the monetary system's 'profit-motive' demand encouraging businesses to minimize these efforts, we could easily automate these testing processes, simulating hundreds of thousands of different software checks to find such holes and correct them.  Nevertheless, it’s important to recognize that unpredictability is a big part of why some pieces of code have errors or ‘allow’ others to take advantage of these holes and mess with them.  The more robust the code you have, the less likelihood there is for anyone to crack it.  And the saner a society becomes, the fewer ‘attackers’ emerge out of it.

Artificial (or not) Intelligence:

We are finally ready to focus on Artificial Intelligence, and I hope that all that was presented so far will start to make more sense now.

Machine Learning and Complex Rules

In what category does the cat fit?  Can you find the clue?

acerous: parrot, humpback whale, chimp;

non-acerous: goat, giraffe, cow.

So, is a cat acerous or non-acerous?

You don’t need to already know what words like ‘acerous’ mean.  Instead, look to find patterns within the groups of animals to then figure out by what criteria they are being organized, like we did for the numbers earlier.  This is how you can ‘learn’ where the cat fits in.

So, what do a goat, a giraffe, and a cow have in common?  And what do a parrot, a humpback whale, and a chimp have in common?  In order to classify the cat in this system, you must first figure out the ‘rule’ behind the game.  Sounds familiar, right?

If I told you that a cat is acerous, would that help you work out the rule?  Is this classification based on genetic relationships, skeleton types, brain, diet?  It will be quite hard for you to figure it out with only three examples of each, but it would become more obvious if you were given thousands of examples, and you would realize that ‘acerous’ means ‘animals without horns’.  The more clues you have, the more obvious the rule becomes, like in the case of ‘random’ numbers.  This example illustrates how machines ‘learn’.

You provide huge amounts of data and program the system to simulate and search for patterns.  If the computer program has a huge database of animals and their features to work with, it can then arrive at a statistic as to what the acerous or non-acerous animals have in common with each other.  If the statistics show that 60% of non-acerous animals share the same number of legs, but 100% of them grow horns, it will record the 100% statistic as the more reliable criterion.  IT does not understand what animals are or what that group is all about, as IT is all about working with statistics.  So, now that the software has the statistic showing by what criteria the acerous and non-acerous groups are divided, and if the computer has access to a lot of biological information about cats and that information shows that cats do not grow horns (cat ≠ horns), then it can fit the cat into the correct category.
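A minimal sketch of that idea, with a tiny made-up ‘database’ of animal features (the features and groups are invented for illustration):

```python
# Find which feature best separates the two groups, then classify the cat.
# The tiny 'database' below is invented for illustration.

animals = [
    {"name": "parrot",         "group": "acerous",     "has_horns": False, "legs": 2},
    {"name": "humpback whale", "group": "acerous",     "has_horns": False, "legs": 0},
    {"name": "chimp",          "group": "acerous",     "has_horns": False, "legs": 2},
    {"name": "goat",           "group": "non-acerous", "has_horns": True,  "legs": 4},
    {"name": "giraffe",        "group": "non-acerous", "has_horns": True,  "legs": 4},
    {"name": "cow",            "group": "non-acerous", "has_horns": True,  "legs": 4},
]

# What fraction of each group grows horns?
for group in ("acerous", "non-acerous"):
    members = [a for a in animals if a["group"] == group]
    with_horns = sum(a["has_horns"] for a in members) / len(members)
    print(group, f"{with_horns:.0%} have horns")  # 0% vs 100% - horns split the groups perfectly

cat = {"has_horns": False, "legs": 4}
print("cat is", "acerous" if not cat["has_horns"] else "non-acerous")  # acerous
```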

The more data you feed to such algorithms, the better they become at statistically predicting ‘patterns’, because they can quickly find ‘biases’ in the ‘riddle’.  Statistics smartly applied.

True story: a well-known author wrote a book and credited it to a fictitious author in order to publish it anonymously.  The book was later scanned with a statistical program like the one described above and, by comparison with other writings from multiple well-known authors, it correctly connected the ‘anonymous’ writing with the real author’s name.

So how did that happen?

Let’s say Jack is very interested in science and astronomy, but his friend Emma is more ‘romantic’ and interested in movies.

Imagine now that we build a piece of software that can make associations between words, so we set up the rules:

stars, moon, night, telescope, math = astronomy/science

actor, night, Titanic, love = movies/romantic

If this software analyzes many of the emails that both send out to the world over a period of time, it can show that Jack is interested in astronomy/science and Emma in romantic movies.  This would only serve as a simple statistic, programmed by humans and based on the number of unique words used.  But if you also weigh the repetition rate of the words used by each of them, then the resulting statistics can become more complex and accurate.  Why?

Could you guess which one wrote this: “Did you see Apollo 13 last night?  I loved the shot of the Moon!”  It could be Jack, who is interested in astronomy and just saw a movie, or it may happen to be the more ‘romantic’ Emma, who loves movies and just happened to see one that is about ‘space/astronomy/science’.  You and I may be terrible at guessing who wrote the email, but the software will have a much better chance at ‘guessing’, and I will explain why.

The text has the following words that both use in their writings: Apollo 13, night, moon, and love.  If Jack rarely uses “night”, and has almost never used “Apollo 13”, but mentions the Moon in 50% of his emails, yet Emma has never used the words “moon” or “Apollo 13” at all so far in her emails, but “night” and “love” come up 80% of the time (combined), then you can use some mathematical formulas to work out which one, statistically, is more likely to have written that message.  You may not follow how I described their word use, I don’t either :), but that’s not the point.  The point is that rules like these are used to better predict who has written a particular text.  So, after analyzing and ‘weighing’ all of the statistical data, the software may say: There is an 86.4% chance that Jack wrote this email.
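Here is a minimal sketch of that kind of calculation.  The per-word frequencies are made up for illustration; a real system would measure them from each person’s past emails:

```python
# Score how likely each person is to have written the message, based on how
# often they use each word (a crude, naive-Bayes-like calculation).

word_frequency = {
    "Jack": {"apollo": 0.01,  "night": 0.05, "moon": 0.50,  "love": 0.02},
    "Emma": {"apollo": 0.001, "night": 0.40, "moon": 0.001, "love": 0.40},
}

def score(person, words):
    result = 1.0
    for word in words:
        result *= word_frequency[person].get(word, 0.001)  # tiny default for unseen words
    return result

message_words = ["apollo", "night", "moon", "love"]
scores = {person: score(person, message_words) for person in word_frequency}
total = sum(scores.values())
for person, value in scores.items():
    print(f"{person}: {value / total:.1%} chance of having written it")
```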

There is nothing ‘esoteric’ about a piece of software ‘recognizing’ you by your writing style.  It is strictly rules and formulas that give birth to statistics and probabilities.

Face Recognition

Do you know why Facebook’s face algorithm is so good at recognizing faces (except mine)?  It’s because people tag other people, and if a person is tagged enough times, the software can recognize recurring patterns in the pixels (face), like in the ‘acerous’ animals example, and associate those patterns with your name.  Then it can ‘recognize’ you (the pixel patterns for your face) in new photos, based on these multiple examples and associations it makes and has been exposed to.  You are basically ‘teaching’ this software how to ‘recognize’ faces every time you tag someone.  But since I do not have a personal Facebook account (so there are no photos of my face with tags on them), the Facebook algorithm does not have any data about my face on which to apply that pattern recognition formula.  If you tell Facebook’s algorithm to analyze the face of a human who is not in their database, it will have no clue who that human is.

An example of the process:

- the software can assess the overall texture of skin to help determine age.  It can also detect moles and other features

- it searches for shadows and wrinkles to help determine age

- the software ‘reads’ the shape of lips to determine mood and gender

- eyebrow shapes are key to determining mood

- jewelry can help determine gender

- shadows cast by hair help determine gender

https://www.youtube.com/watch?t=434&v=8wHZ3oso618

Language and Voice Recognition

IBM’s Watson supercomputer does not really ‘understand’ human language.  It generates statistics to allow it to arrive at probabilistic outcomes.  Watson’s software analyzes language in the same way that Facebook’s system analyzes faces: it records millions of writings and statistically graphs how words are used.  For example, 85.5% of the occurrences of the word “are” were found to be associated with “you” (written one after the other, in either order), and always in the form of “are you” when the sentence ends with a question mark.

So Watson just learned an important aspect of English grammar: that within a question, the correct form is “are you”, rather than “you are”, and that learning is based completely on what it statistically deduced from analyzing a lot of English text.  You see, you don’t have to enter any grammatical rules into Watson.  Allow it to use similar statistical measurements and huge amounts of data, and IT will statistically figure out the rules on its own.  Pretty awesome, and super important!
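A minimal sketch of that kind of counting, over a handful of made-up sentences:

```python
# Count how often "are you" versus "you are" shows up in sentences that end
# with a question mark. The sample sentences are made up for illustration.

sentences = [
    "How are you?",
    "Are you coming tonight?",
    "You are late.",
    "Where are you going?",
    "You are my best friend.",
]

counts = {"are you": 0, "you are": 0}
for sentence in sentences:
    if sentence.endswith("?"):
        for pair in counts:
            counts[pair] += sentence.lower().count(pair)

print(counts)  # {'are you': 3, 'you are': 0} - in questions, "are you" dominates
```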

In the same way, you don’t have to tell a similar software what ‘acerous’ means.  You just have to provide multiple examples and it will figure it out, again, based on statistics.  This is why more advanced autocorrect software works quite well.  Even if you intentionally write “How you are?” or “How is you?”, the software statistically knows that both of them should be “How are you?”.

Watch this video to better understand how IBM Watson works - https://www.youtube.com/watch?v=DywO4zksfXw

The same thing goes for voice recognition software that can ‘understand’ and process multiple accents as it transforms the sound into a graphical representation (like random.org does with atmospheric noise) and then analyzes the various patterns within it.  If thousands of people are recorded saying the word “you”, then visual representations of the sounds can become associated with that word.  “You” becomes a graphical sound pattern.  Do that for all words and you can recognize and process voice.  Do that in multiple accents and you can recognize voices even more accurately.

Automated Suggestions

Whenever you see recommendations, from Netflix to YouTube, Spotify to Amazon, they are ALL based on the principles of statistics and patterns (what you previously bought, listened to, searched for, etc.), and the more data you provide to them (the more you buy, the more songs you listen to, and so on), the more accurate they become at ‘predicting’ what you want/like.  If you buy ‘stuff’ from Amazon that is associated with ‘astronomy’ (pillows with a star design, a book about Mars, etc.), then Amazon may recommend a telescope to you.  If you buy make-up, purses, clothes or other ‘fashion’-related items, then the recommendations of shopping websites will be based on that.  If you typically buy songs that have a particular bitrate, then songs with similar bitrates will be recommended to you.  All of this is based on pre-programmed rules implemented by Amazon.
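
A minimal sketch of the ‘people who bought X also bought Y’ flavor of this, with invented purchase data.  Real recommendation systems weigh many more signals, but the counting idea is the same:

```python
# A toy sketch of co-purchase based recommendations. The data is invented.
from collections import Counter

purchases = {
    "ana":  {"star pillow", "book about Mars", "telescope"},
    "bob":  {"star pillow", "book about Mars"},
    "cris": {"book about Mars", "telescope"},
    "dan":  {"make-up", "purse"},
}

def recommend(my_items, all_purchases):
    scores = Counter()
    for items in all_purchases.values():
        if items & my_items:                  # this person bought something I also bought
            for item in items - my_items:     # count everything else they bought
                scores[item] += 1
    return scores.most_common()

print(recommend({"star pillow", "book about Mars"}, purchases))
# -> [('telescope', 2)] : the most statistically associated item gets recommended
```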

Playing Games

Check out this video of a software playing a computer game - https://www.youtube.com/watch?v=Q70ulPJW3Gk

For anyone who is not aware of the information we’ve presented so far in this article, it may seem really ‘mysterious’ how a computer (an A.I.) could learn to play a video game, and then become better the more it plays, undisturbed by any human intervention.  But you might now recognize that this software uses the same methods as Facebook or IBM Watson: statistics and pattern recognition.  IT starts with a bunch of ‘random’ moves (pre-programmed diverse moves that seem to follow no purpose, similar to basic ‘random’ number generators), but the moves that lead to scoring more points are marked as statistically better and stored in its memory for use in future moves.  And so, the software adopts more and more of the moves that score the most points and will eventually be able to play that particular game extremely well.  If we were to anthropomorphize the process, it uses continuous ‘reinforcers’ to learn and perfect what works best for the task it was programmed for (source).

This kind of ‘reinforcement’ algorithm is the latest big thing in A.I., one that will make these systems appear to us as though they ‘learn’ in the same way that humans do.  If you build a four-legged robot whose software only allows it to move its legs in a ‘random’ manner, but you also program it to adopt the leg movements that move it forward the most, then, over numerous tries, the robot will continually re-write its software toward whatever patterns of four-legged forward movement it arrives at.  The way it ends up moving forward may look completely new to us humans, but again, that’s only because of the way this robot was programmed (adopt new moves that propel it forward).  This robot may ‘invent’ a better four-legged forward walking method than we have ever witnessed.
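
Here is a toy sketch of that ‘keep the random changes that score better’ loop.  The ‘robot’ is just four numbers standing in for leg movements and the scoring function is invented, so this only illustrates the reinforcement idea, not a real robot controller:

```python
# A toy sketch of reinforcement by 'keep whatever random change scores better'.
import random

def distance_travelled(pattern):
    # Invented stand-in score: how far this movement pattern propels the robot.
    return sum(pattern) - 0.1 * sum(abs(x) for x in pattern)

pattern = [random.uniform(-1, 1) for _ in range(4)]       # four 'legs', random moves
best = distance_travelled(pattern)

for _ in range(5000):
    # Try a small 'random' change, within the legs' physical limits of -1..1.
    candidate = [max(-1.0, min(1.0, x + random.uniform(-0.1, 0.1))) for x in pattern]
    score = distance_travelled(candidate)
    if score > best:                                       # adopt only what works better
        pattern, best = candidate, score

print("learned movement pattern:", [round(x, 2) for x in pattern])
print("distance travelled:", round(best, 2))
```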

Doing Research

There is a flatworm that, if you cut off its head, grows it back again.  Nuts!  How in the world is it able to do this?  No one properly understood how the genes of this flatworm are linked and work together to make this possible.  That is, not until a machine came up with the answer to this 100-year-old mystery in just three days.  Ain’t that amazing?  🙂  It is, but if you’re not aware of the details of the story, and perhaps overly influenced by sensationalistic movie plots, your projection may be far off.  That computer didn't sit down to ponder what a flatworm is, or even how the ‘damn’ thing grows back its head every time you cut it off.  The computer is basically a bunch of algorithms made by humans to simulate and test ‘random’ (different) scenarios about how the flatworm regenerates itself, similar to the other examples showcased so far.  In this case, scientists had to invent a custom programming language so that the software could simulate many ways that genes might work in a flatworm in order to arrive at the most statistically probable answer.  It created ‘random’ simulations, and those that turned out closer to ‘regenerating the virtual worm’ were kept, while the other ones were discarded.  Repeating this process many times, similar to how the other software ‘learned’ how to play a game, the best scenarios were adopted and implemented.  In this way, it only took three days for the software to arrive at the best explanation of that process ever produced, so far (source).
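
A toy sketch of that ‘simulate random variations, keep whatever matches the real experiments best’ loop.  The ‘gene network’ here is just a list of numbers and the ‘observed results’ are invented, so this only illustrates the search idea, not the actual flatworm model:

```python
# A toy sketch of 'random simulations, keep the ones closest to the observations'.
import random

observed = [0.9, 0.1, 0.7, 0.3, 0.5]       # invented stand-in for experimental results

def simulate(model):
    return model                            # stand-in 'simulation' of a candidate model

def error(model):
    # How far the simulated results are from the observed ones.
    return sum((s - o) ** 2 for s, o in zip(simulate(model), observed))

model = [random.random() for _ in observed]               # start from a random model
for _ in range(20000):
    candidate = [min(1, max(0, x + random.uniform(-0.05, 0.05))) for x in model]
    if error(candidate) < error(model):                   # keep models that explain the data better
        model = candidate

print("best model found:", [round(x, 2) for x in model])
print("remaining error:", round(error(model), 4))
```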

Recognizing Objects

Facebook’s face recognition works mainly based on what people tag (which pixels/which faces), correlating the pixels with the tag, but there is another, more sophisticated and useful feature that such complex algorithms can perform: you throw a bunch of photos (thousands) at them and they sort the photos by what is in them.  So, dogs become grouped with other dogs, cars with cars, and faces with faces.  How do they do that?

Here’s an example: how did Google’s A.I. recognize a cat in YouTube videos, without anyone telling it what a cat looks like?  Well, they intentionally fed it thousands of videos with cats after programming it to recognize patterns in videos (pixels, for example) and to statistically ‘understand’ which patterns are similar across all of those videos.  Analyzing pixels and then highlighting the most probabilistic features it found (let’s say all videos have two semi-ellipses - the cat’s ears - and two circles - the cat’s eyes), it could create a pixel-based image that looks to us like a cat - it puts those ellipses and circles in relation to each other based on how it detected them in the videos it analyzed.  In the same way, if fed a lot of male porn, that software might draw a picture of a penis.  That may be harder to accomplish, however, as the software may become confused between the shape of an arm vs. a penis :).  So it may draw something that looks like either a small and weird arm, or an overly exaggerated long penis with tentacles (fingers) :).  Software does not discriminate between the two.  It just identifies shapes and draws sketches based on how it is programmed and what data it is fed.

The better such pattern recognition software becomes, the more accurately it will be able to ‘recognize’, tag, and sort more and more objects.  But it also depends on the characteristics of what you feed it, as a cat is more easily distinguished in video material than a penis attached to a body.

Watch this 2015 video presentation if you want to learn more about the pattern recognition process.

Recently, some news titles sensationalized the concept of “What computers dream of”, showing a series of surreal, heavily distorted images.

Interestingly, it is the same kind of software that drew a cat.

This time, the software was ‘forced’ to recognize patterns (like buildings, faces, etc.) within ‘random’ photos and, once it finds them, it is programmed to modify the photo with those details (to emphasize the image with those details every time it finds them).  This causes the software to overemphasize these features with each pass, resulting in a ‘weird’ photo.  A.I. ‘dreams’ only in the sense that we could also say cats meditate when they lie down.

But this type of feedback loop built into the software, and based on complex rules and statistics, is super powerful, as it can be thrown at anything when provided with proper data and programming for each task.  The examples I’ve shown here are oversimplifications of the actual systems behind these A.I.’s, as they have a ton of rules, but the basics are still the same.


Future Implementations

This complex software, which many refer to as “A.I.”, will make a huge difference, as it will be able to deal with numerous dynamic scenarios, even if only for specific tasks.

One example of how these systems will significantly impact our world can be seen by looking at weather prediction.  To predict the weather, people mainly use statistical software based on past data: what the chances of rain were for a given temperature, humidity, etc., and then apply that toward future predictions.  So, as emerging patterns are compared to similar patterns in the past, they are able to ‘predict’ what will happen.  That works rather well and you can predict the weather several days in advance.  With the new kind of statistical and programming software (A.I.) that we’ve been discussing, you can do a kind of ‘reverse engineering’, putting all past data into software so it can run multiple simulations (like they did for the flatworm).  New patterns will emerge that will make weather predictions much more reliable, perhaps for weeks in advance rather than days.  This works because we are unable to recognize nearly as many varied patterns in weather as a computer simulation can, and those may be crucial for future predictions.  The difference with this new kind of software is that it constantly feeds on huge amounts of new data, continually recognizing new patterns by performing multiple simulations on it.  Once you get the idea behind this new approach, you will be mind-blown by how many important applications it can be used for.
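
A minimal sketch of the ‘compare today’s conditions to similar past patterns’ part of this, with invented numbers.  Real forecasting uses vastly more variables and full simulations, but the nearest-pattern idea looks like this:

```python
# A toy sketch of 'predict by looking at the most similar past days'. Invented data.
past_days = [
    # (temperature C, humidity %, did it rain the next day?)
    (22, 85, True), (30, 40, False), (21, 90, True),
    (28, 35, False), (23, 80, True), (31, 30, False),
]

def chance_of_rain(temp, humidity, k=3):
    # Find the k most similar past days and see how often rain followed them.
    by_similarity = sorted(past_days,
                           key=lambda d: (d[0] - temp) ** 2 + (d[1] - humidity) ** 2)
    nearest = by_similarity[:k]
    return sum(rained for _, _, rained in nearest) / k

print(f"chance of rain tomorrow: {chance_of_rain(22, 82):.0%}")   # resembles past rainy days
print(f"chance of rain tomorrow: {chance_of_rain(30, 33):.0%}")   # resembles past dry days
```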

Imagine understanding cancer, global warming, resource management, all kinds of diseases, virus proliferation, important patterns in DNA code, and a vast number of other applications!

Building a Robo-Chicken

Let’s use all that we’ve learned so far to build a robo-chicken because, why not....  🙂

Imagine that we want to build a robot chicken and apply an A.I. to it to allow it to cross the road, so we can finally answer that “Why did the chicken cross the road?” riddle :).

We first ‘teach’ the chicken how to walk using the ‘machine learning’ method, allowing it to adopt the best leg movements that ‘propel’ it forward and keep it level (straight).  After many tests, we have a robo-chicken that can walk forward.  We then add more complexity to the movement, on the same ‘machine learning’ principle, to allow the chicken to change direction.  Next, we put a camera on the robo-chicken’s head and program the software to associate certain shapes of cars with stopping the chicken from moving, but only when these shapes are of a certain speed and size, so that the chicken can cross the road if the cars are far away (small shapes) and moving slowly.  Now the chicken is ready to try crossing the road, as it will wait for the proper (programmed) time: cars far away, meaning no ‘big’ shapes in sight.

We can add a plethora of additional rules, such as taking into account the width of the road, or calculating what speed it should move at in order to cross more safely.  We might even add software and a microphone so that it will ‘listen’ for particular sounds (like a honk) and re-evaluate its moves (supposing it made a ‘wrong’ move and someone is about to run over the chicken).  Adding many relevant rules will increase the complexity of this chicken’s ‘behavior’, perhaps bringing it close to that of a normal one.  But this robo-chicken will not head for a gun shop to buy a pistol and hijack a car like a frustrated human might do, unless we program it to do so.  In other words, this chicken must be programmed accordingly for any new task beyond crossing a road.  A toy sketch of what such crossing rules might look like follows below.
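
Here is that sketch of hand-written crossing rules.  Every input, threshold and unit is invented purely for illustration; a real system would need far richer sensing and far more rules, as the next paragraph makes clear:

```python
# A toy sketch of hand-written rules for the robo-chicken's crossing decision.
def should_cross(car_shapes, road_width_m, chicken_speed_mps, heard_honk):
    # Each detected car shape: (apparent_size, approach_speed m/s, distance m).
    if heard_honk:
        return False                                     # something went wrong: re-evaluate
    time_to_cross = road_width_m / chicken_speed_mps
    for size, speed, distance in car_shapes:
        if size > 0.3:                                   # a 'big' shape means a car is close
            return False
        if speed > 0 and distance / speed < time_to_cross + 2:   # it would arrive too soon
            return False
    return True

# Far away, slow car -> cross; close, fast car -> wait.
print(should_cross([(0.05, 5, 200)], road_width_m=8, chicken_speed_mps=0.8, heard_honk=False))
print(should_cross([(0.60, 15, 30)], road_width_m=8, chicken_speed_mps=0.8, heard_honk=False))
```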

But we’re not done yet.  If we do not program or test it thoroughly enough, our chicken could cause an accident if, say, it crosses the street in front of an oncoming bicyclist.  Neglecting to put that data in (what a bicyclist might look like) may allow our chicken to wreak havoc if it crosses streets without paying attention to cyclists (or other potential obstacles, such as people).  We could even fail to account for the fact that when it rains, the chicken’s leg movements are different, so it may not properly calculate how fast it should move to successfully cross the road, increasing its likelihood of getting ‘killed’.  You see, making a robo-chicken that can cross the road is quite difficult, and many tests and adjustments must be conducted before allowing the robo-chicken to run autonomously on the streets.  Imagine, then, how many tests were required to allow for autonomous self-driving cars.  They are all required in order to be able to deal with unpredictability: holes in the software (bugs), unknown and new situations that such systems will encounter, and so on.

Can the robo-chicken really become like a real chicken, or perhaps even human-like in its ‘intelligence’, if enough complexity is designed into it?

Human-Like

So, are A.I.’s actually dreaming, painting, writing songs, communicating, or whatever?  Well, yes and no.  No, because the association with ‘dreaming’ is mainly about humans in a sleep state and, even if the result is similar (a song, a sentence, etc.), the way that humans and software arrive at those outcomes is not at all similar.  So the ‘yes’ can only make some sort of sense when you look at the results, but not at how they were arrived at.

The way that they ‘reinforce’ the statistical software that many refer to as A.I. may seem similar to how a human learns, but there is hardly any similarity at all.  Humans have far more input mechanisms than any machine with any software.  Humans do not adopt what is good for them or what proves to work best and then incorporate that new knowledge like a simple line of code in a database.  Humans may learn how to ski better when it happens to be their birthday, but may grow bored of skiing on a Sunday.  Humans can eat ice cream and something about the taste can cause them to ‘connect some dots’ and come up with new ideas seemingly unrelated to ice cream.  Humans are constantly bombarded with multiple stimuli (ideas, smells, sounds, memories, etc.) and they unconsciously take bits of that in, with the brain then creating massively complex associations that constantly weaken or strengthen its neural connections.

Even if you were to combine all the world’s complex statistical algorithms into one, so that it recognizes objects, language and sounds, identifies emotions, communicates, and so much more, all of those ‘abilities’ are still limited by the data you feed into it and the rules that you include to manage them.  Even if the rules and data are super complex, to the point where we can talk to this robot and feel like we’re talking to a human, and this robot has attitudes, tells jokes and laughs at yours (maybe only the ‘good’ ones), for this robot to become ‘more like a human’ it must be constantly exposed to an environment from which it can pick up continuous data in the form of new ‘experiences’.  Even with all that, I doubt you could simulate a human, who has so many inputs and feelings (multiple chemical discharges), fuzzy memories and moods, ‘falls in love’, and so on.  A robot can make a ‘sad’ face, but is IT truly sad the way a human understands that emotion?

The ‘marriage’ of A.I. with human-like attributes is due to the interface that is often built around these tools.  They give IT a human voice, along with human-like responses, and most of us tend to be easily fooled into believing that these tools are like humans.  Google Now or Siri may say things like “Hey Jack, how was the meeting?  What did Emma think about your designs?” because, as we explained earlier, this type of software statistically ‘understands’ how to match words in a way that is comprehensible to us humans.  IT does not actually ‘wonder’ like a human does, nor is it ‘curious’ about what happened at your meeting like a human might be.  IT doesn’t ‘care’ about any of that, as IT only looked over your emails and came up with a good match of words for the interaction with you, and this match of words merely appeared in the form of a ‘question’ to you.  If a robot is programmed to simulate a sneeze and ‘leak’ water from tiny holes in its metal body, would we say that the robot has the flu?  Of course not.  It only simulates some outward flu-like symptoms.  In very much the same way, a robot cannot be ‘sad’.  IT can only simulate some outward symptoms of being sad, like facial expressions.  When humans experience being ‘sad’, they also experience many thoughts and other feelings surrounding those thoughts.  They may have trouble focusing, become angry and curse, or may even vomit from the inner turmoil they are experiencing.

Perhaps at some point, humans will be able to make a machine that has as many inputs as humans, and can learn alongside humans similar to how a child does (here is one recent path towards an artificial neural system).  But even then, the strong suspicion is that it will still be vastly different from how a human is, because humans are different from each other based on their “total environment”.  It’s the environment that makes a human.  So this imaginary future complex statistical algorithm could only reflect some characteristics of some humans under certain circumstances and periods of time.  Of course, all of that is pure imagination, as today’s software is still very far away from anything like that, and it may never reach that point, as the future our technology is directed towards seems to be on a completely different path: not simulating humans, but developing something much better that humans can use.

No one has even created “artificial stupidity” that resembles a human, let alone artificial intelligence.  What humans have created so far is much more accurately described as complex pieces of software and sensors that, based on statistics and predetermined mathematical rules, can arrive at very ‘educated’ guesses in a very short amount of time.

Another news title I saw recently: “Google’s artificial-intelligence bot says the purpose of living is 'to live forever’”.  But of course, the software that Google is using can only output some matching words based on statistics and smart algorithms, and only in response to the collection of words that is fed into it as questions.

Human: What is the purpose of life?

Machine: To serve the greater good.

I would say that such responses sound more like artificial stupidity, and I would recommend to Google that they feed their algorithm with our article on Purpose and Evolution :), or some articles on the evolution of words and general semantics.  Perhaps after feeding it relevant data and relevant algorithms, their statistical software would have replied “I do not think the question makes sense.”

The Turing Test is supposedly the ultimate test for ‘intelligence’ (or at least it was until recently).  Here’s how it works: imagine you are communicating via a text chat with ‘someone’, not knowing whether it is a human or a machine.  Based only on the conversation, would you be able to tell which one you are communicating with?  Well, it probably depends on whether you watch too many movies, or whether you read science, or listen to music.  Basically, it depends on who you are, what questions you choose to ask and what you deduce from the whole conversation.

There may be people who will ask more relevant and complex questions and will figure out rather quickly that they are talking to a machine instead of a human being, and perhaps some that would ask relatively simple questions and be easily fooled by a much simpler A.I. system.

What if you ask “What is 3454*4546?”  Would it look like a software program if it answers quickly?  What if it’s a human being with a calculator, or one messing with you by giving you a ‘random’ result of the equation that you can’t quickly check?  Or perhaps a rare human who can multiply very quickly?  Or a piece of software that does not have multiplication programmed into it and is unable to answer?  Being able to ‘talk’ with software the same way that you talk to your friends does not mean the software is ‘intelligent’, nor does it mean that you are :), or me, or anyone else.  It only means that both parties are agreeing, to a certain degree, on a set of rules that they are using to exchange information.  It’s rather hard to tell how well that process works because, for example, this book is being explained by a human (me), but is being ‘understood’ by a wide variety of people, each in their own unique way, and perhaps not at all by some.  Maybe some know far more than me about programming and, as a consequence, understand what is being explained in this article better than I do, while maybe others know very little about technology and simply cannot comprehend most of the points I am trying to make.

The Turing Test can only test software for certain ‘abilities’, e.g. making statistical sense of some human language rules.  If a piece of software passes the Turing Test, that does not mean that it can drive a car, wonder about exoplanets, get angry, care about your meeting, and so on.

Next time you ‘talk’ with Google Now, Siri, Cortana or, in the future, with IBM Watson, ask them: “Then why how, is it you that can be around?” to see if any of them think of you as nuts, or think of it as being a riddle, or are not in the ‘mood’ for answering that question.  You could probably simulate all of those in software A.I., but they would only be simulations that can go so far.  A human has ‘moods’, and responds or reacts to questions and statements based on their environment (internal, as well as external), culture and whatever complexities may be associated with the situation they happen to be in at the moment (both conscious and subconscious).  In other words, a human is massively more dynamic and unknowable.  A human might slap you in the face or grow angry if s/he finds your question annoying or offensive.  Software does not have that, as it’s pre-programmed to follow a particular path, no matter how complex that path may have been made for it.

If a human being learns how to play a game, that experience will help him/her in playing other games, as well as influence his/her overall behavior.  Software, if it learns a game, has no idea how to play another one.  You would need to reprogram the software before it could learn a different game, even if the software is based on similar ‘machine learning’ concepts.

A human’s many input mechanisms are to A.I. what random.org is to a simple random generator website/app.  Humans are so complex in the ways that they pick up information from the environment that we may call their associations ‘random’.  For example, if you happen to drink orange juice while learning maths, the taste of orange juice may influence you to learn better for the rest of your life.  That’s how ‘sensitive’ humans are to these inputs.  They are like random.org’s atmospheric noise in the way they ‘ingest’ information from the environment, and in how that information is then processed.

But what is the point of ‘simulating’ a human or, more to the point, a human brain?  How can you refer to a simulated brain as being human-like when it would be fully devoid of all other inputs - feelings as chemicals, the sense of balance, low or high energy levels of the brain induced by the state of the rest of the body, etc.?

Airplanes do not have wings like birds do, trains and cars do not have legs, and the fastest boats make use of propellers instead of fins.  There are many, many cases in which it makes little sense to try to mimic nature, because you can invent much more efficient systems to manage the specific goals you wish to accomplish.  Today’s A.I.’s are not being designed to simulate the human brain.  As a result, they are becoming increasingly extraordinary at performing complex tasks that no human brain could do, becoming an extension of us humans.

Can these systems become dangerous?

Is it possible for the varied samples of A.I. that we have presented so far to become unpredictable to the point that they can be dangerous?

When we tried to drive a car to that lighthouse, Google did not try to kill us by mapping out a very dangerous road for us to traverse.  It was more like the robo-chicken or an operating system: the situations you may encounter on today’s roads are so vast and varied, and there are so many roads, that Google’s system cannot predict them all.  In our case, that road may have been closed or otherwise re-classified after Google ‘indexed’ it, and Google wasn’t updated about it.  That is a ‘hole’ in the software, just like the holes in your Operating System that cause errors.  Such systems cannot be ‘perfect’, a word that makes little sense here, since such systems are measured in percentages of accuracy.  For instance, Google’s self-driving car may only be 97% safe, but that qualifies as a great achievement, considering how complex it is to make such an autonomous vehicle.  Some people may find themselves lost from following Google Maps recommendations, or perhaps even involved in accidents, but for the vast majority, there are no issues related to making use of that system.

Such systems are so complex that there will always be some degree of unpredictability.  This is why these systems need to be constantly updated to become better, and that is also the key to making them even more predictable.  It’s the Linux approach, but on steroids, and becoming increasingly automated all the time.

Some scientists dealing with such A.I. (complex statistical machines and algorithms) admit that they do not fully understand how the software arrived at a decision or ‘learned’ a new skill (play a game, write a sentence, etc.).  This is directly related to all that we have explained so far in regards to ‘random’: from initially basic rules, these algorithms grow into very complex sets of outcomes that become very difficult, if not impossible, to reverse engineer.  Nevertheless, even when you find no patterns in how such a system arrives at a result, you can still understand how they work by discovering ‘biases’ and patterns in what they output (behavior).

The key factor behind understanding why such systems can be considered ‘safe’, even when not fully predictable, is the following: these systems are in a closed loop!  The self-driving car A.I. ‘knows’ how to operate its car and avoid obstacles, but IT has no idea how to play a video game or answer questions.  If you put that amazing software that ‘learned’ how to play that game into a car, it will not be able to start it, let alone attempt driving it.  Even if you want to use the software that determined how a flatworm regenerates in order to understand how other animals regenerate, you will have zero success.  You would first have to add new data to that software and new methods of testing regenerative processes that are relevant to the new ‘creature’ that you want it to analyze.

This is why Hollywood depictions of A.I. are so far off and primitive, as they imagine A.I. to behave like a human, rather than what it really is.  “Powerful Statistics” sounds rather ‘lame’, doesn’t it?  But that’s much, much closer to reality than calling it “Artificial Intelligence”.  There are army drones that are programmed to kill humans within particular areas (members of other tribes), but you will never hear of a US army drone turning back to attack US troops because it felt it was unfair to kill those people from Iraq (or wherever).  Such systems are far from being like humans.  They can manage complex and powerful associations when they are programmed correctly and vast amounts of relevant data are fed into them, but that’s all they can do.  They can arrive at new data, and then work further with those results, but only within a closed loop and based on their programmed rulesets.  Google’s self-driving car will not, no matter how many tests and simulations you do with it, come to understand human language, or turn its security cameras to the sky and decide that a better use for them would be to ‘hunt’ for alien species instead of avoiding obstacles.  Those are just primitive human concepts promoted vividly by a world-wide-dumbed-down media.  Fearing that such A.I. will try to ‘take over’ is like fearing that your smartphone will try to mess up your messages today in order to sabotage your relationships.  It’s nothing more than a ridiculous, uneducated fear.

Even in cases where such pieces of software become too complex and hard to predict, regardless of whether they are ‘in a box’ (a closed system) or not, we can already predict a lot about systems that are based on ‘randomness’.  Let’s look at some human behavior, since what could be more unpredictable than that, right?  After all, humans try to make A.I. so complex that it performs similarly to humans (which appears to be of no practical use).  So, let’s see how we can cope with humans from the perspective of ‘unpredictability’, as humans seem to exhibit a lot of ‘random’ behaviors.  Is that true, though?

Here’s a simple game (please don’t cheat!).  Quickly choose a ‘random’ number between 1 and 10.  It should be the first number that comes into your head.  Got it?  My guess is that you chose 7.

Most people choose that number, a pattern that emerges statistically by simply asking many people to choose a ‘random’ number between 1 and 10.  I may have no idea how you arrived at your answer, yet for most of you, I still guessed it.  And if you did not choose that number, go and ask around and you will find that most people will choose it.

You see, even when we deal with very unpredictable systems (like a human being), there are methods for predicting some of their outcomes (perhaps many), and this is enormously important.  Here’s why:

Let’s look at random.org again, since it produces sets of results whose deterministic origins are nearly impossible to understand because… well… just try to predict how thunderstorms (40 lightning discharges a second) influence the overall atmospheric noise.  See how good you are at that :).  Even with that amount of complexity, we can still ‘beat’ random.org at some of its games.

If you go on their website and play the coin flip game, or if you have their app on your phone and open up the same game, you will (of course) find it impossible to guess the series of coin tosses (whether they come up heads or tails).  So, what if I dare you to play a game with me, using the same system, where each of us picks a sequence of three coin results, and whoever’s sequence shows up first in the stream of tosses wins?  Guessing three coin tosses in a row seems even harder to do.  However, if you are the first to choose your ‘random’ sequence of three coin flips, I can then make my choice more ‘mathematically educated’, allowing my sequence to show up first more often than yours.  My chances of winning at this game are far superior to yours, ALL THE TIME, even when using random.org.

You can play this game to test it, and the rule is very simple.  If the other person picks, for instance, Heads Tails Tails (HTT), all you have to do is take the first two of their picks (HT) and put them last, and then make your first ‘letter’ the opposite of the middle one in their guess (T becomes H in our case).  So against HTT, you end up choosing HHT.  (A small simulation after the examples below shows how well this works.)

Examples:

If you choose TTT - I choose HTT

If you choose THH - I choose TTH

If you choose HHH - I choose THH

and so on.
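
And here is a quick simulation of the coin game, assuming a fair coin (which is what random.org’s flips amount to for this purpose), so you can check that the counter-picking rule really does win most of the time:

```python
# A quick Monte-Carlo check of the 'counter-pick' strategy described above.
import random

def counter_pick(theirs):
    # e.g. HTT -> HHT : the opposite of their middle letter, then their first two.
    return ("H" if theirs[1] == "T" else "T") + theirs[:2]

def play(p1, p2):
    seq = ""
    while True:
        seq += random.choice("HT")
        if seq.endswith(p1):
            return 1
        if seq.endswith(p2):
            return 2

their_choice = "HTT"
my_choice = counter_pick(their_choice)                    # HHT
games = 20000
wins = sum(play(their_choice, my_choice) == 2 for _ in range(games))
print(f"{my_choice} beats {their_choice} in about {wins / games:.0%} of games")  # roughly two thirds
```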

To better understand this approach, see this video explanation of the game
https://www.youtube.com/watch?v=IMsa-qBlPIE

You can also read about it on Wikipedia.

You will be surprised how often you will win using this method.

Remember, random.org is quite ‘random’, meaning quite unpredictable in how it arrives at its outcomes.  Even so, we can make sense of the results in certain circumstances, discovering patterns and predicting future results.

So, let’s get back to you.  How did I predict your number (if I did)?  If you use random.org to generate a ‘random’ number between 1 and 10, it won’t statistically be 7 any more often than any of the other possibilities.  But it is for humans.  There are polls where thousands of people from around the world played this game and, overwhelmingly, chose 7.  In attempting to explain why, people have come up with many different guesses: because there are 7 dwarfs in Snow White, due to the James Bond character’s “007” handle, because there are 7 days in a week, and so on.  Maybe you chose it because I mentioned earlier in the article that 7 is an important number when rolling two dice.  No one truly understands why most people pick that number, but it seems to be purely cultural, even if we cannot trace the exact process by which the decision to pick that number is made.

There are people in the world who, because they were never exposed to the idea of numbers, do not understand them.  In one documentary, someone asks a man from a tribe to say how many children he has, and the man replies with “Jimmy, John, Clara, Max” (not the actual names), because he does not have the concept of ‘four’.  The most he could do was to draw four lines in the sand to show how many children he had, but he could not understand why you and I would call that ‘four lines’.  People are definitely not born with the ability to recognize numbers; they are taught what these symbols are by their culture.  It is more likely that people choose ‘7’ because we live in very similar environments in many respects: the days of the week are basically the same in all ‘modern’ tribes, maths is universal, and the same stories and movies are well-known to most of us.

Human behavior is similar to ‘random’ numbers generated by random.org, in that you can’t guess all that well how they will behave or what numbers will be generated.  But the more humans you study, like the more numbers you have, the easier it becomes for you to spot patterns (biases) and predict outcomes, rendering their behavior (like it renders ‘random’ numbers) as non-random.  Both human behavior and certain number strings only seem ‘random’ until you are able to see their patterns (biases).

If you take five people from five different tribes and tell them “Do something random for five seconds”, it is very likely that they will do very different things: facial expressions, dance moves, sounds…  Given that experience, you would be inclined to say that people do indeed do ‘random’ things.  But take 10 million people and you will definitely find numerous patterns that you can track back to their individual tribal cultures (perhaps similar facial expressions, dance moves or sounds).

The thing is, if you roll two or more dice just once, your chances of predicting the outcome are similar to (or worse than) rolling just one die (source).  It’s only when you roll two or more dice many times that the pattern favoring certain totals emerges, as we explained (source).  The same thing goes for human behavior: the more sampling and testing you use to reveal more patterns, the more predictability you gain.
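
A small sketch of that: simulate many rolls of two dice and the bias toward the middle totals becomes plainly visible (the number of rolls is arbitrary):

```python
# A toy simulation showing that the bias toward totals like 6, 7 and 8 only
# becomes visible after rolling two dice many times.
import random
from collections import Counter

rolls = 100000
totals = Counter(random.randint(1, 6) + random.randint(1, 6) for _ in range(rolls))

for total in range(2, 13):
    share = totals[total] / rolls
    print(f"{total:>2}: {share:5.1%} {'#' * round(share * 100)}")
# 7 comes up in roughly one roll out of six; 2 and 12 in about one out of thirty-six.
```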

Why do you think most people want to get married, but only with one other human being if they are not of the Islamic ‘faith’?  Why do you think heterosexual men are attracted to certain women?  Or that they like women at all?  Why do you think women tend to have long hair?  All of those ‘patterns’ serve as examples (proofs) of how humans are anything but ‘random’.  They are ‘biases’ that you can find when you look at many humans (lots of data), and these ‘biases’ make such systems predictable (remember?).

We can predict, to certain degrees, how humans will behave under many circumstances.  For example, if someone goes running down a street naked, we can predict what facial expressions the witnessing men or women are likely to make, and how offended they may (or may not) become.  And all of that is based on observing how multiple humans behave across various cultures to discover the ‘biases’ within their culture.  If they were raised in the US, Spain or Romania, then most will feel offended by the sight of a naked human in a public space.  Assuming the ‘streaker’ is male, they may try to beat him up in Romania, while in the US, they may arrest him instead.  All of that is predictable to a reasonable degree.  Of course, ‘some’ observers may react in a more unpredictable way, just as not every roll of two dice produces a 6, 7 or 8, but the statistical majority will behave according to our ‘predictions’.

If we move our ‘naked man’ from one of the ‘modern’ tribes to a tribe that is used to nudity (perhaps because everyone is nude there), then we can also predict how his witnesses will react.  As you can imagine, they will not pay much attention to the naked guy.  However, they will if the guy has a significantly different skin color than what they are used to seeing.

What I am trying to express here is that the notion of “free will” is completely bogus.  If people were to behave more ‘randomly’ than not, we would be unable to communicate with each other, or have any kind of society for that matter, as all people would be doing very different things.

The interesting thing with humans is that you can even predict their ‘individual’ behavior.  You don't need that many samples, but larger samples are always useful to help increase the accuracy of your ‘predictions’.

We can even use reinforcers to manipulate human behavior; the same kinds of reinforcers that are behind the idea of ‘machine learning’.  But we will go into greater detail on that in a separate article.

People are essentially machines, even in the way they think.  Human behavior functions deterministically within groups, but is more like atmospheric noise when sampled individually.

The key to greater predictability in both A.I. systems and human behavior is more sampling, testing and simulating.  And even if you are unable to fully understand what exactly allowed an A.I. to come up with a new cancer treatment, or a human to kill another human, you can still predict and manipulate such complex systems, be it by trying to reverse engineer them (like decoding an encrypted message), or by increased sampling and testing (analyzing more humans to learn the patterns in their violent behavior - it may turn out to be related to a lack of money, stress, little love and care from parents, and/or any number of other environmental factors in and throughout their lives).

Think again about fire.  For thousands of years, people had to ‘tame’ it without any real understanding of how it works or how to best manage the dangers.  With today’s newer technologies and sophisticated software, we are able to track down and deeply understand how to ‘tame’ such dangers, and perhaps predict even the most unpredictable pieces of software.  For example, when IBM Watson comes up with sophisticated suggestions for medical treatments, it records every move/decision/association it makes, so that you can track all of the processes it went through to arrive at those decisions.

Worrying about A.I. is like worrying about Genetically Modified Organisms (GMO).  You have to specifically point out just what it is that you're worrying about, as Genetically Modified Organisms come in many forms and from many techniques, while A.I. also comes in many different flavors and systems.  Both A.I. and GMO may be dangerous when unpredictable, which is why you have to make sure that the world you live in is not putting up roadblocks to testing, experimenting, or implementing safety measures, as it does today due to its controlling profit motive.

So, the next time you hear something about “Artificial Intelligence“, replace those two words with the phrase “Powerful Statistics” and see how ‘sensational’ those news articles end up sounding.

There are already numerous A.I. systems that influence your life, perhaps even endangering it.  For example, Google’s ‘search’ A.I. may be set up to recommend to you whatever is currently popular each time you perform a search or use a service like Google Now.  Where the society is dumbed down, the results it provides transform its ‘customers’ (you and me) into yet more dumbed-down creatures, and the loop continues, feeding the system even more dumbed-down content supported by dumbed-down people.  This may seem like no real concern or danger for your life or other people’s lives, but if you continually feed unreal news to people (like the news about A.I. - ironically fed to them through another A.I.), then “Powerful Statistics” easily show that people are much more likely to grow strong, unreal fears about adopting new systems that aim to save many lives (cancer research, nanobots in medicine, genetic engineering, stem cell research, etc.).  Your life is also in danger when such A.I.’s are misused by surveillance systems to detect, from whatever you write, watch, download or visit, whether you fit into a profile category that present-day ‘justice’ systems have deemed ‘criminal’, and to arrest you for watching particular movies that are considered illegal, or accessing websites that are considered illegal, or just for using some words in a certain order that have been labeled as implying some kind of ‘terrorist threat’.  Additionally, autonomous drones may kill innocent people because of how they are programmed.  And even when they more accurately kill just those they were programmed to target, such abhorrent misuse of technology only creates more enemies and more fear of such A.I.’s, as wars are never the answer to solving differences or conflicts.  But of course, all of that reflects severely corrupted aspects of the culture, not the software.

A complex decision for software to make:

Imagine that we now have a global society in place where money (trade) doesn't exist, so its influence is not present.  How would A.I. systems function in a rare situation like the following: let’s imagine that we have an autonomous airplane transporting 100 people over a crowded city when, unfortunately, it runs into a flock of birds, causing some of its engines to fail.  The only options available to the A.I. are either to crash-land the airplane in an empty field outside of the city, or to try to land at the city’s airport, with an 80% chance of not reaching the destination and crashing into the core of the city, killing an estimated 1,000 people.  How would you program the software to handle such a potential situation?

First of all, we must put a very serious emphasis on the following: you will always strive to avoid such situations as much as possible.  Thinking about such scenarios without taking this focus into account is irresponsibly unrealistic.  As a result, you will never have systems moving airplanes over crowded cities, or airplanes without backups for all important systems (landing, propulsion, etc.).  You can reduce fatal occurrences to such a low level that it becomes more probable for someone to be killed by a wayward 3 cm meteorite (and then develop systems to prevent that, too).  To really understand how unreal such imaginative, fear-based scenarios are, let’s look again at Google’s self-driving car.  These cars only run at the full speed limit when there is no possible hazard ‘in sight’ within a radius that allows them to stop from that speed.  So, if its radars and all other systems detect a ‘clear’ road with nothing to adversely affect tire traction (bad weather, for example), then the car can run at the full speed that the road allows, but when there are (statistical) risks that the car ‘sees’ (traffic crowding, rain or icy conditions, etc.), then the car is programmed to reduce its speed, so that even if something unexpected runs in front of it, it has plenty of time to stop.
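
A toy sketch of that ‘only go as fast as you can stop within what you can see’ rule.  The braking physics is simplified and every number is invented; Google’s actual system is of course far more elaborate:

```python
# A toy sketch of choosing a speed so that the car can always stop within
# the distance its sensors report as clear. Simplified physics, invented numbers.
def choose_speed(speed_limit_mps, clear_distance_m, deceleration_mps2=4.0, traction=1.0):
    # Highest speed v for which the braking distance v^2 / (2 * a * traction)
    # still fits inside the clear distance.
    safe_speed = (2 * deceleration_mps2 * traction * clear_distance_m) ** 0.5
    return round(min(speed_limit_mps, safe_speed), 1)

print(choose_speed(25, clear_distance_m=200))                  # clear road -> full speed limit
print(choose_speed(25, clear_distance_m=30))                   # possible hazard -> slow down
print(choose_speed(25, clear_distance_m=30, traction=0.5))     # rain or ice -> slow down even more
```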

You can watch this recent TED video explaining how Google’s self-driving car deals with such situations - https://www.youtube.com/watch?v=tiwVMrTLUWg

That being said, and assuming that you understand that significant safety measures are paramount for autonomous systems, in a case where a piece of software (A.I.) has to choose between 100 and 1,000 lives, I for one see no answer, because I refuse to think that such situations will ever happen in the society that we describe in detail in our book: "The Money Game and Beyond".  Even if it does ever happen, the human race will very quickly develop the necessary solution to prevent such a scenario from reoccurring - a scenario that, again, is extremely unlikely to happen if such systems are properly implemented.  This is similar to worrying that tall buildings might fall over onto people’s heads…  Well, you do everything possible at the time to make sure that never happens and, if it ever does, then you learn how to make better buildings.  There is no point in hunting down the designer of a building to accuse him, as that never addresses the issue.

Summary

Let’s try to summarize this mammoth:

Artificial Intelligence is a severely hyped name for complex statistical software with complex mathematics behind it, all contained within a closed loop.  And it is hyped in the dumbed-down direction, one that tries to compare these systems to humans and anthropomorphizes them, a lot.  Today’s A.I. software is more capable than a human brain on so many levels that it seems very counter-intuitive (backward) to try to make it work like the human brain.  But even if someone tries that, we are nowhere near achieving it, nor is there any visible path towards achieving that level of complexity or mechanical dynamics.  It also seems to me that less-educated people who think that A.I. (this software) refers to machines becoming more and more human-like completely ignore that human thinking is fully created by the environment humans are exposed to throughout their lives.  There is no inbuilt, inherited ‘reason’ in humans, and the same applies to intelligence.

There was a fictional book about some fictional people who built the smartest fictional Artificial Intelligence (a computer) and asked it the question: “What is the meaning of life?”.  The machine replied that it would take millions of years to fully calculate the answer.  The people waited, passing the information about this machine from generation to generation, turning IT into a kind of God from which people were awaiting the ultimate answer.  And then the moment arrived: everyone gathered to hear the answer, with lots of emotion and a huge cult that had developed around IT.  And IT finally said: I have the answer to your question, “What is the meaning of life?”.  The answer is 42!  🙂

Perhaps after reading this book, it now makes more sense to you how such a fictional story is not far from the truth.  More than likely, that fictional A.I. was either fed insufficient information for what it was programmed to do, or it was asked a not-so-relevant question and the associations it made from that were so nonsensical that it arrived at 42 as the answer, or it made so many complex associations, because of its complex software, that its answer may have been very scientifically relevant, but too ‘random’ for people who had no idea how it arrived at that answer, or where and how to apply it.

Intentionality and Money are the issues that we are right to fear when it comes to such complex machines, which is why it is essential to tackle those first, and the only way I am aware of doing that is to move towards a global society of abundance (one where any form of trade - money, bitcoin, whatever - is obsolete) because in such a society very few (if any) would be motivated to intentionally ‘animate’ these tools in a harmful direction.  Dealing with unpredictability seems to be much less of a concern within such a society.

I hope you understand that ‘our’ relationship with the tools we invent is very complex, and it is extremely dependent on the culture/system, as these tools can be used to cure diseases, create abundance and positively affect our lives overall, or they can be ‘used’ to make our lives miserable and keep them hanging in perpetual danger.  I also hope you now better understand the blurry line between humans and machines, yet still see the differences where they are obvious.

But the most important aspect that I’d like you to take away from this book is the huge discrepancy between what most media outlets present about technology and ourselves, and what the reality actually is.  Nanobots, ‘cyborgs’, A.I.’s, 3-D printed organs, and so on, have a much deeper and more important impact for everyone when they are properly understood in their scientific light, rather than being twisted, exaggerated and bleached into ‘entertaining’ presentations or sensationalistic items competitively intended to increase a news channel’s viewer ratings.

All of the technologies presented in this book represent what they are capable of ‘as of this writing’, which means that a month from now, a year, or 100 years, they will have been refined that much further.  Technology continually moves forward at an exponential rate and, if it is managed by a saner society, the prospect of these technologies becoming extraordinarily important to our lives is enormous.  This is why we strongly argue that what we need to do is strive to change the structure of society, in order to allow these technologies (and much more) to develop more quickly, more safely, and in ways more relevant to the needs of all life.

Additional resources:

  1. Courses on Artificial Intelligence: Intro to Machine Learning, Intro to Artificial Intelligence, Knowledge-Based AI: Cognitive Systems
  2. Our book on how to automate the entire world: AA WORLD
[/showhide]

THIS IS A TRADE-FREE PROJECT

Live Updates

Live updates about TROM Project

If you want to stay up to date with everything we do for this project, you can check these live updates from any tromsite.com page.
You can also click people's names to go to their personal online places (websites).

Tio
My parents are using TROM-Jaro for the past half year or so. And Linux (Ubuntu or other flavors) for the past 3-4 years or more. They simply need a stable operating system and a way for them to access the Internet without "trading" as much. In truth my first motivation to make a custom Linux OS was to do it for my parents because I wanted them to have a "ready to use" OS that blocks ads and trackers and it is simple enough for them to use. In fact I tested TROM-Jaro with them first, because if they can use it, most people can. They are not at all computer savvy and probably most people are like that, so we need an operating system that ca be used by anyone. They know very well how to keep the system updated now, and because TROM-Jaro has automatic backups before every update then it is easy for me to fix their computers in case an update breaks the system, which never happened so far.

I also installed TROM-Jaro on a half a laptop (that had no screen anymore) and I connected it with a TV so now they have a "smart TV" to watch online whatever they want to watch. I paired that with a cheap Bluetooth remote controller that worked out of the box and so now they have 3 devices with TROM-Jaro.

So, it is super useful that TROM-Jaro is set up for people who don't know much about computers because I make sure my parents don't see ads or are tracked online, have automatic backups and the interface is so easy to use that they can find whatever they need by pressing or clicking one button (the Dash).

The laptops have between 2 and 4 GB of RAM and a simple integrated Intel graphics card, and the TV has 1.6GB of RAM. TROM-Jaro works well on all of them out of the box. Chat image
Tio
Thank you Savadey for the 100 AUD via Paypal and the message "keep up the good work " - Awesome! Lately more and more financial support for TROM which is fantastic! Thank you so much people https://www.tromsite.com/donate/
Tio
Here's a video I edited for Sasha for her campaign :) https://www.youtube.com/watch?v=ABbNeEmhMh0 - watch it! it is cool. very. interesting. talks about us too. also a footage with her, me and aaron somewhere there...
Tio
Sasha needs some help to finish a Trade-Free Book about her journey. https://www.gofundme.com/f/big-world-small-sasha-the-book - she will post the book as Trade-Free on her website in digital format.

Sasha has been a great help for TROM for the past 2 years. She helped TROM financially (quite a lot), with proofreading or by spreading the word about TROM and Trade-Free on her blogs, social networks, and she even created a TROM Discussions Youtube stream for a while where she was discussing TROM subjects with new-to-the-project people. Her website, https://www.bigworldsmallsasha.com/ is labelled as Trade-Free.

She is also now part of our small "TROM family" - I (Tio), Aaron and her share a flat that we transformed into our small TROM office (will post videos and photos about this in the following months), and we help each other financially to be able to keep working on these projects. So, any help for Sasha is also a help for TROM ;) - she needs financial help to be able to focus on finishing her book (she already wrote some 70%) - needless to say in the book she talks about TROM too ;).

What is important about what she does is that she reaches new people (people interested in travelling around the world and having this different kind of lifestyle) - these people we could not (or rarely) reach with our projects. Now such people get to also know about TROM.

Big World, Small Sasha!
Tio
Here is a review of TROM-Jaro by Alexio :) - https://www.facebook.com/tromproject/posts/2619863141403328?__tn__=K-R (it is too big to post the entire text on our live) Chat image
Tio
Keep in mind that Manjaro is one of the most well known and used Linux distributions. If they said they may incorporate some of our good ideas into their official Gnome edition, then that's big :)
Tio
We are getting more attention for our TROM-Jaro from Manjaro's community which is fantastic - see here https://forum.manjaro.org/t/trom-jaro-a-trade-free-manjaro-build/93739 - One recent message was very encouraging, comming from a member of the Manjaro's Team: "This distro makes me happy. It's cool that Manjaro has a daughter distro, with clear vision, ethics and style. I'm hoping to borrow some of your good ideas to gnome edition. :) " - He also said he can help with some stuff, which is truly amazing. Imagine if this trade-free idea catches on in the open source community. It could be a fantastic way to make people aware of what creates most of our issues in the society, not only in the open source. Let's see where this leads, but so far this idea with "trade as the source of most problems" and focusing on trade-free goods and services as a "cure" for it, seems to catch on little by little.
Tio
Did a bunch of work for tromsite.com - improved some stuff, fixed some bugs. this site is made of so many many many parts, yet the way it is built it seems like one. is almost like we took cats, dogs and elephants, and created one single creature but you could not tell that this monster is made out of other animals :))
Tio
Ah, and when you download them you download them in multiple video formats + their respective subtitles. We'll work to incorporate any other video we have on tromsite through the Internet Archive.
Tio
Another super cool thing thanks to the trade-free Internet Archive: on the videos page you can now download any series you want form p2p https://www.tromsite.com/videos/ - and everytime we add new content to any of the video series we make, the download links gets updated automatically. Much less work for us! :)
Tio
Now you can get TROM-Jaro iso from both our server and bittorrent https://www.tromjaro.com/install/ - both should work all the time. We use the Internet Archive as the torrent source since they also provide a seedbox so it guarantees peers. We got 631 vies on the manjaro forum for our tromjaro post. And a bunch of people got interested in-it. That's awesome! I hope to find more time to add more apps to tromjaro.com.
Tio
Working on the Fort-Profit Entertainment book by Dima! :)
Tio
So these days we had a TROM-Cast, like we usually do, but this time we discussed our new trade-free.org project. It got the attention of some people, and new ones participated in the cast, challenging the idea. To make it super-mega short, I “created” this path now for “social change” via “trade as the source of most problems”, and some think it is not a good idea. I’ll keep it all extremely brief since I already wrote thousands of pages on the topic. Here's the blog I wrote https://www.tiotrom.com/2019/07/why-trade-and-not-scarcity/
Tio
Thank you Jennifer for the 100 Euros donation. Wow! And for the message "Thank you for all that you do!" :) - https://www.tromsite.com/donate/
Tio
I sent 20 or so emails to several people in the Linux community about tromjaro. Let's see if any of them pay attention to it.
Sasha
Tio and I have been working very hard on my new site for the last 2 months and now it is finally done! Check it out: https://www.bigworldsmallsasha.com/

It’s a trade-free travel blog that details how you can travel around the world with little money, but the main idea is actually to entice people with travel stories in order to introduce them to the important topics that we discuss here in TROM. So the point is to reach new people :)

These two pages (and a lot of my blog posts) include links to TROM material: https://www.bigworldsmallsasha.com/traveltips/ and http://www.bigworldsmallsasha.com/lifestyle/

The website was designed by Tio – I can't thank him enough for the enormous amount of effort he put into my site. Tio is by far the kindest and most genuinely exceptional person that I know. And also a genius! :D And very creative. All the photos of my stuff, the Polaroid theme and basically everything else on my site was his idea :). And we spent many hours/days/weeks customizing it to make it look like it does now.

I hope you enjoy it :)
Tio
I fixed trade-free.org for mobiles ;)
Tio
Cody: "I use trom-jaro to easily edit videos and maintain multiple systems that help decentralize some of the tools and videos TROM creates! I also use it to host the TROM-Cast weekly and thanks to it's lack of tracking / data harvesting, my computers resources can be freed to do what I WANT =]"
Tio
Organized TROM Riot a lot since we're fuckin' growing :D - we need a room for video, one for tromjaro, one for webdev, and so forth. I love it! Also, Alexio joined the team :) - he organized TROM stuff so neatly and awesomely that we need to use his powers! hehe
Tio
David Panart just became a $15 patron! - thank you so much! :) - monthly donations help a lot! https://www.tromsite.com/donate/
Tio
The brilliant Roma is working on something super cool so that anyone will be able to translate the trade-free.org website in a bit. The general idea is this: people have access to a public file on gitlab that contains the English text and all of the links on the page, and they simply translate that. Then trade-free.org can call that page every time people add the language code after the URL; say, trade-free.org/es triggers the ES (Spanish) translation. Of course, anyone can use that file + the HTML5 version of the website to host it anywhere. Roma is like Ziad, it seems. Two brilliant people who are helping TROM in ways I could not manage myself! :)
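To give a rough idea of how this could work (this is just a sketch, not Roma's actual code - the file names and structure here are made up), the page could simply fetch a translation file matching the language in the URL and swap out the English text:

// Sketch only: load translated strings based on the language segment in the URL.
// Assumed (hypothetical) layout: /translations/en.json, /translations/es.json, ...
// each mapping element ids on the page to translated text.

type Strings = Record<string, string>;

async function applyTranslation(): Promise<void> {
  // e.g. https://trade-free.org/es -> "es"; no segment means English
  const lang = window.location.pathname.split("/").filter(Boolean)[0] || "en";
  if (lang === "en") return; // English is already in the HTML

  const response = await fetch(`/translations/${lang}.json`);
  if (!response.ok) return; // unknown language: keep the English text

  const strings: Strings = await response.json();
  for (const [id, text] of Object.entries(strings)) {
    const element = document.getElementById(id);
    if (element) element.textContent = text;
  }
}

applyTranslation();

The same translation file could also be used by anyone who wants to host the HTML5 version of the page themselves, as mentioned above.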
Tio
We moved our videos to The Internet Archive :) - a trade-free alternative to YouTube, which is an abundant service that is based on trades :) - we love the Internet Archive. You can see our new and improved videos page here https://www.tromsite.com/videos/ - now it is way easier to access all of our videos. Unfortunately we don't know why the subtitles are not working right now....we contacted IA...we'll see. We want to move every other video there too. The next step would be to stream via p2p and not even rely on IA. But for now this is the first major step to move away from YouTube and Vimeo ;)
Aaron
TROM-Jaro is my tool to create trade-free things. I use it to translate TROM materials into German, such as videos and audiobooks with the video editor kdenlive + audacity, and books with the partially trade-free app Master PDF Editor. I communicate with humans via Signal, Riot and Thunderbird, and I work on my personal website + browse the web with Firefox. All trade-free. Plus I decentralize our materials with easy-to-use apps like webtorrent and the beaker browser. All in all I love tromjaro as it is fast, easy to use and it simply works. The fact that it works on my MacBook is another big plus. www.tromjaro.com
Tio
We'll skip tromcast today. I am too tired :D - will do one next week for sure. I'm working on other stuff too, so I'll be busy these days, especially today
Tio
Thank you Roma for the 86 Euros donation :) and the awesome message:

"Trade-free. org is great from both sides: as an idea and the site itself is amazingly crafted! Kudos to all who was involved. Appreciating all you hard work to release it! A real pleasure to be a part of such dedicated Team! Love all"
Tio
We will start to make these kinds of posts about TROM-Jaro :) - from Gia:

"I use TROM-Jaro as my daily drive to browse the internet without being tracked by scripts (at least not as much), without seeing ads, and such. Less trades. I use it to translate TROM materials, read books, watch documentaries, or for digital drawing. https://www.tromjaro.com/" Chat image
Tio
Here are some websites labeled as trade-free by Alexio: www.creativitate.cf www.energetica.cf www.echipamente.cf www.instalatii.cf - we need to make this idea spread and have other websites labeled as such.

Alexio's message:

"Some of the reasons to propose these websites as Trade-Free are:

- I am sharing/adapting on these webpages the trade-free content from the TROM projects;

- it is all about open education - http://is.jrc.ec.europa.eu/pages/EAP/OEREU.html ;

- the content on these websites is available under CC Attribution-Share Alike 4.0 International. "
Tio
I had to fix some stuff for trade-free.org - so now it is 100% downloadable. Before, it wasn't. Now even the font we use is downloadable. Also, Roma added it to gitlab https://gitlab.com/tromsite/trade-free/trade-free.org - it needs to be updated with today's updates ;)
Aaron
We released the trade-free website in german: https://www.handelsfrei.org/ - very exciting :)
Tio
We are super close to moving to the Internet Archive, except for the documentary for now. Rafa is helping with migrating the subtitles ;) - I have also prepared a text document to explain how one can translate trade-free.org into their own language - I can't manage that myself....too much stuff to manage. So people have to take that on as their responsibility and translate and host that page. I'll give them the tools. It may take 1-2 days of work, but it is not super complicated. Still need to refine everything, then I will post it on trade-free.org.
Aaron
The video + audiobook "valuable without a value" is now available in German on tromsite.com/de
Tio
We are super close to moving all of our videos (except the documentary for now) to the Internet Archive. Almost all are already uploaded there, and I made a new video page for tromsite that is much, much better. We will have better control over the videos now. This year we will give up Vimeo, moving away from trade-based services. We walk the talk, boys! :)
Tio
Finally posted about TROM-Jaro on the Manjaro forum https://forum.manjaro.org/t/trom-jaro-a-trade-free-manjaro-build/93739 and I will send info about it to others too - DistroWatch and some YouTubers.
Tio
Now musikwave.com, videoneat.com, tromsite.com, tromjaro.com and tiotrom.com are all labeled (at the bottom of each page) as Trade-Free :) - if you have a project that creates trade-free stuff then you can label it as such and link to https://www.trade-free.org/ - so we can start something interesting and useful. ;)
Tio
Big thanks to the "invisible" (or not so visible) people who support TROM on Liberapay https://liberapay.com/TROM/ and Patreon https://www.patreon.com/trom for the past years. Without them we could not have paid for servers, tools, and time to build TROM and its baby projects. So, thank you so much!
Tio
Added trade-free to https://www.tiotrom.com/ and tromjaro.com too - tomorrow I'll add it to musikwave and videoneat ;) + write a text to promote tromjaro!
Tio
Added trade-free to tromsite.com and removed the links to any other external "thing" like FB or YouTube and so forth, because tromsite is tromsite and I don't want to redirect people to these trade-based places :) - purism! hehe
Tio
Join tromcast here https://meet.jit.si/tromcasttradefree - password: tromtrade
Tio
Fixed some stuff with tromjaro.com - kinda ready for promoting on the Manjaro forums and so forth! I am working to add "This is a Trade-Free Project" on all websites: musikwave, tromsite, tiotrom, videoneat, everywhere I can! :)
Tio
The next few days are dedicated to adding a trade-free sign to all of my/our websites, linking to trade-free.org. Also fixing some bugs with some websites + then I'll create a long post promoting tromjaro.com on open source forums/websites. Tomorrow, TROM-Cast about trade-free.org and we'll also talk about TVP.
Tio
And thank you Willy for your monthly 10 Euros Paypal donation! ;)
Tio
Thank you anonymous donor for the 100 Euros donation! https://www.tromsite.com/donate/ :D
Tio
I wanted to post the below message on FB too but they won't let me: it is against their "community standards" they say. That's why I don't give much of a fuck about FB. We want to share our originally made videos on their platform and they won't let us....fuck them. Not all magnets are "illegal", you stupid!
Tio
And btw, the videos on trade-free.org are streamed peer-to-peer. They are not on any server! :) - they are on people's computers. If you want to help us, seed those videos:

The main video's magnet link is: magnet:?xt=urn:btih:bbe36cb6ba5d6afeb50421a2509a969786710e8e&dn=Trade-Free.mp4&tr=udp%3A%2F%2Fexplodie.org%3A6969&tr=udp%3A%2F%2Ftracker.coppersurfer.tk%3A6969&tr=udp%3A%2F%2Ftracker.empire-js.us%3A1337&tr=udp%3A%2F%2Ftracker.leechers-paradise.org%3A6969&tr=udp%3A%2F%2Ftracker.opentrackr.org%3A1337&tr=wss%3A%2F%2Ftracker.btorrent.xyz&tr=wss%3A%2F%2Ftracker.fastcast.nz&tr=wss%3A%2F%2Ftracker.openwebtorrent.com

And the logo making video is: magnet:?xt=urn:btih:d66bf00649e889e261782c3a5098f68b68d35a00&dn=Trade-Free+Logo+Making.mp4&tr=udp%3A%2F%2Fexplodie.org%3A6969&tr=udp%3A%2F%2Ftracker.coppersurfer.tk%3A6969&tr=udp%3A%2F%2Ftracker.empire-js.us%3A1337&tr=udp%3A%2F%2Ftracker.leechers-paradise.org%3A6969&tr=udp%3A%2F%2Ftracker.opentrackr.org%3A1337&tr=wss%3A%2F%2Ftracker.btorrent.xyz&tr=wss%3A%2F%2Ftracker.fastcast.nz&tr=wss%3A%2F%2Ftracker.openwebtorrent.com

Use WebTorrent to seed them, else it won't work. It is simple and you'll help us! Thanks!
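If you are comfortable with a bit of code, here is a minimal sketch of seeding in the browser with the webtorrent package (just an illustration, not our setup - it assumes you bundle webtorrent for the browser; the infohash is the one from the first magnet link above, trimmed to a single tracker for readability). The easiest no-code way is probably WebTorrent Desktop, which can also connect to browser peers.

// Minimal sketch (browser): download and keep seeding the main Trade-Free video
// over WebRTC so that visitors of trade-free.org can stream it from you.
// Assumes the webtorrent package is bundled for the browser (npm install webtorrent).
import WebTorrent from "webtorrent";

// Infohash taken from the full magnet link above, with one of its wss trackers.
const magnetURI =
  "magnet:?xt=urn:btih:bbe36cb6ba5d6afeb50421a2509a969786710e8e" +
  "&dn=Trade-Free.mp4&tr=wss%3A%2F%2Ftracker.openwebtorrent.com";

const client = new WebTorrent();

client.add(magnetURI, (torrent) => {
  console.log(`Downloading "${torrent.name}" - pieces are seeded as they arrive.`);
  torrent.on("done", () => {
    // Keep the tab/app open so the full file stays available to other peers.
    console.log("Download finished - now seeding. Leave this running to help.");
  });
});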
Tio
Thank you, thank you, thank you to all who have helped with this site: the idea, the suggestions, the everything!
Tio
https://www.trade-free.org/ is DONE! Check it out :D - we will improve on it, but this is basically it. We will add translations and we will create a trade-free directory soon :P - sooo so happy it's finally here!
Tio
Yeah, a bit more work on trade-free.org....but we are getting there....I don't want to get too stressed out by the work I have to do...VN also has some bugs that I need to fix. After we release the TF website I need to manage my time better, have some more leisure time for myself, and be less stressed out with this work. I am too passionate, so it's hard for me not to stress out because I am excited and I want to fix and release!
Tio
Fixed some tromjaro.com stuff and need to fix some tromsite.com stuff....like plugin updates that break parts of the website, which you only notice after a few weeks....annoying
Tio
We are getting close to releasing trade-free.org. It's kinda done....maybe we will release it today. That website will for sure improve in time, and we will do a documentary about "trade" that will be posted on the site. But we need to get it out there. We need to link to it. We need to talk about it ;)
Flingafu
I made a video on how to set up your own trade-free WiFi network! https://www.youtube.com/watch?v=h3dY7LmqwPs
Tio
Participate here https://meet.jit.si/tromcastvolunteer and pass is tromcastv
Tio
TROM-Cast about volunteering in an hour and a half https://youtu.be/xNNoNvmVMwE
Tio
New FAQ submitted: Are some people born with genes or biology that give them a better chance of becoming smart at certain things in life? - answer here https://www.tromsite.com/faq/
Tio
Ok, so VideoNeat had some weird bugs that I had to force-solve...I have no idea why those bugs appeared....it's the theme, apparently...anyways, it's back! We worked a lot today on trade-free.org. Ziad is such a great help. Great guy! The main things I need to add to trade-free are the text (which is kinda done) and translations (which can be difficult). Also, soon we will have double the storage and power for the server where we store all of our websites. I'll let you know. Super excited!
Tio
For the past many hours I've been trying to fix some weird VideoNeat design bugs...don't know wtf happened there, but some pages are broken...
Tio
Patreon: "MayBritt just edited their pledge from $30 to $50." - Thank you so so much for your ongoing support! https://www.tromsite.com/donate/
Seb
New TROM Poem out (made this one 3 years ago, crazy that it's been this long) - 'The Ugliness of Beauty': https://www.youtube.com/channel/UCG60mUXDirFbCT4SaumWpyA
Tio
Ok, so, I tested some video-decentralization options.

1. So, the ideal solution would be using webtorrent + amara in a popup. Meaning, the videos would be streamed from our computers (p2p) and, if we can, embedded with the amara player so all subtitles are added automatically. If possible, doing that in a popup would be fantastic - click and play. WebTorrent streaming is working (we tested it) - though there are some downsides...but manageable (a rough sketch of how this could look in the browser is at the end of this post). This streaming option is the perfect one since we won't need any streaming service/hosting. We have 80GB of video files already and we are growing in size every week, so no hosting plan is enough for us in the long run. The problem is how to add it in a popup and deal with the subtitles....this is super problematic. As soon as we make trade-free.org work perfectly well with this technology, we could implement it on tromsite.com. Also on videoneat.com (likely) - imagine people being able to click and play any torrent file from videoneat!!! That would be my dream!

2. Another solution is to use the video files directly (like mp4 or webm) from either our server or the Internet Archive's servers. Serving from our server is safer because we have 100% control over the files, but that would mean upgrading our server, which will cost some money, and these people are super charlatans - they say "oh you can upgrade to $40 a month for double the space and power"....however they don't mention that this applies ONLY if you accept a 3-year "contract" with them for the plan....otherwise it costs a lot more....also I am not sure how well the server would handle video streaming. Then we would have to integrate with amara, if possible....

3. Directly using the Internet Archive's player. Their player is not that great, but it is the easiest solution right now, though we will have to add the subtitles to it manually. It is the best Vimeo replacement for sure.

Damn, it's a lot of work and it's not that easy.... :( - and I have so much stuff to work on...
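For what it's worth, option 1 in the browser could look roughly like this with the webtorrent library (just a sketch, not our final setup - the magnet link and the "#player" element are placeholders, and the amara/subtitles part is exactly the open question above):

// Rough sketch of option 1: stream a video peer-to-peer in the browser with
// webtorrent and render it into the page. The magnet link and "#player" are placeholders.
import WebTorrent from "webtorrent";

const magnetURI = "magnet:?xt=urn:btih:<infohash-of-the-video>&tr=wss%3A%2F%2Ftracker.openwebtorrent.com";

const client = new WebTorrent();

client.add(magnetURI, (torrent) => {
  // Pick the video file from the torrent and stream it while it downloads.
  const file = torrent.files.find((f) => f.name.endsWith(".mp4"));
  if (file) {
    // appendTo() creates a <video> element inside #player and starts playback.
    file.appendTo("#player");
  }
});

Subtitles would still need to be layered on top of that somehow, which is where amara comes in and where the hard part is.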
Tio
So today I got really pissed off at that plugin giving me the middle finger, and I looked more and more into decentralizing TROM videos. For now we will upload all TROM materials to the Internet Archive and I will probably replace all videos on tromsite with those. I already did it for the trailer on the homepage https://www.tromsite.com/ - it streams from the Internet Archive. Let's see if the URL is persistent. I will do that first, then try to stream from p2p. So far I love the IA.
Tio
aaand it's back! We'll look into the Internet Archive and the like for now...we'll see...so many things to do ....