Links of the week

Balda_20051002_Amiata_08.jpg
Morning mist rolling through beech forest in Monte Amiata, Val d’Orcia, Tuscany, Italy.

Conscious exotica: From algorithms to aliens, could humans ever understand minds that are radically unlike our own? – Aeon
A philosophical attempt to map minds other than the human, with implications for what it means to be conscious. Is consciousness an intrinsic, inscrutable subjective phenomenon, or a matter of fact that can be known? Read on.

Crash Space – Scott Bakker
What would happen if we engineered our brains to be able to tweak our personality and emotional responses as we experience life? What would life look like? Scott Bakker gives us a glimpse in this short story.

AlphaGo, in context – Andrej Karpathy
A short but comprehensive explanation of why the recent AlphaGo victories do not represent a big breakthrough in artificial intelligence, and of how real-world problems differ, from an algorithmic point of view, from the game of Go.

Multiply or Add? – Scott Young
In many business and personal projects, factors multiply, meaning that the performance you get is heavily influenced by the performance of the weakest factor. In other cases, e.g., learning a language, factors add. The strategy to take in developing factors/skills depends on which context, additive or multiplicative, you’re in. For more insights, read the original article.

Human Resources Isn’t About Humans – BackChannel
Often, HR is not there to help us or to solve people’s problems; it is just another corporate division with its own strict rules. But it can be changed for the better. Read on.

Living Together: Mind and Machine Intelligence

Balda_20120325_D700_3739.jpg

Neil Lawrence wrote a nifty paper on the current difference between human and machine intelligence titled Living Together: Mind and Machine Intelligence. The paper initially appeared in his blog, inverseprobability.com, on Sunday, but was then removed. It can now be found on arXiv.

The paper proposes a quantitative metric as a lens for understanding the differences between the human mind and pervasive machine intelligence. The embodiment factor is defined as the ratio between computational power and communication bandwidth. If we take the computational power of the brain as an estimate of what it would take to simulate it, we are talking on the order of exaflops. Human communication, however, is limited by the speed at which we can talk, read or listen, and can be estimated at around 100 bits per second. The human embodiment factor is therefore around 10^16. The situation is almost reversed for machines: a current computational power of approximately 10 gigaflops is matched to a bandwidth of one gigabit per second, yielding an embodiment factor of 10.
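The back-of-the-envelope arithmetic above can be sketched in a few lines of Python. The figures are the paper's order-of-magnitude estimates (an exaflop brain, 100 bits/s speech; a 10-gigaflop machine with a gigabit link), not exact values:

```python
def embodiment_factor(compute_flops: float, bandwidth_bits_per_s: float) -> float:
    """Ratio of computational power to communication bandwidth."""
    return compute_flops / bandwidth_bits_per_s

# Rough estimates from the paper, not measurements.
human = embodiment_factor(1e18, 100)      # exaflop brain vs ~100 bits/s speech
machine = embodiment_factor(10e9, 1e9)    # 10 gigaflops vs 1 gigabit/s

print(f"human embodiment factor:   {human:.0e}")   # ~1e16
print(f"machine embodiment factor: {machine:.0f}") # ~10
```

A difference of fifteen orders of magnitude between the two ratios is what drives the paper's argument: humans compute vastly more than they can communicate, machines do not.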

Neil then argues that the human mind is locked in, and needs accurate models of the world and its actors in order to best utilize the little information it can ingest and spit out. From this need, all sorts of theories of mind emerge that allow us to understand each other even without communication. Furthermore, it seems that humans operate via two systems, one and two, the fast and the slow, the quick unconscious and the deliberate self, the it and the I. System one is the reflexive, basic, biased process that allows us to survive and take rapid life-saving, but not only, decisions. System two creates a sense of self to explain its own actions and interpret those of others.

Machines do not need such sophisticated models of mind, as they can directly and fully share their inner states. They therefore operate in a very different way than us humans, which makes them quite alien. Neil argues that the current algorithms that recommend what to buy, what to click, what to read and so on operate on a level he calls System Zero, in the sense that it bypasses and influences the human System One, exploiting its basic needs and biases in order to achieve its own goal: to give us “what we want, but not what we aspire to.” This is creating undesirable consequences, like the polarization of information that led to the fake-news phenomenon, which might have had a significant impact on the last US elections.

What can we do? Neil offers us three lines of action:

  1. “Encourage a wider societal understanding of how closely our privacy is interconnected with our personal freedom.”
  2. “Develop a much better understanding of our own cognitive biases and characterise our own intelligence better.”
  3. “Develop a sentient aspect to our machine intelligences which allows them to explain actions and justify decision making.”

I really encourage you to read the paper to get a more in-depth understanding of these definitions, issues and recommendations.

Links of the week

Balda_API0004.jpg

Using Machine Learning to Explore Neural Network Architecture – Google
Designing Neural Network Architectures using Reinforcement Learning – MIT
How neural networks can generate successful offspring via reinforcement learning, easing the burden on human designers.

Data as Agriculture’s New Currency: The Farmer’s Perspective – AgFunder News
A classification of three types of agricultural data and how they relate to the farmer’s needs.

The AI Cargo Cult: The Myth of a Superhuman AI – Kevin Kelly
The founding executive editor of Wired explains why he believes superhuman AI is very unlikely. Instead, we already see many forms of new, extra-human species of intelligence.

Everything that Works Works Because it’s Bayesian: Why Deep Nets Generalize? – inFERENCe
Finally, Bayesians can also claim to explain why Deep Learning works! Jokes aside, this article overviews several recent, useful interpretations of Deep Learning from a Bayesian perspective.

Links of the week

Balda_P0030.jpg
Arches on a high cliff over the Mediterranean. Portovenere, Italy.

Deep Habits: The Importance of Planning Every Minute of Your Work Day – Study Hacks
How to increase your productivity by taking control of your time via time blocking.

Chaos, Ignorance and Newton’s Great Puzzle – Scott Young
Luck, chaos or ignorance? Understanding this mixture for your projects may help to better allocate resources.

Garry Kasparov on AI, Chess, and the Future of Creativity – Mercatus Center
A very interesting conversation with Garry Kasparov on chess, AI, Russian politics, education and creativity.

If everything is measured, can we still see one another as equals? – Justice Everywhere
The dangers of measuring everything and ranking ourselves on different scales, neglecting those human skills and experiences that cannot and should not be quantified.

Links of the week

Balda_20160410_X100s_0059
Ski-mountaineers climbing the last steep meters to the summit of the Bishorn (4153 m) in Valais, Switzerland.

Time And Tide Wait For No Economist – UNLIMITED
The changing market of time and how the leisure time gap is widening between skilled and unskilled labour.

The Simple Economics of Machine Intelligence – Harvard Business Review
AI-based prediction tasks will get cheaper and cheaper, but the value of yet-to-be-automated complementary tasks, such as judgement, will increase. A simple but effective economic perspective on the impact of AI.

Do you need a Data Engineer before you need a Data Scientist? – Michael Young
How Data Engineers and Data Architects can make your Data Science team more effective and satisfied.

The Art of the Finish: How to Go From Busy to Accomplished – Cal Newport
How task-based planning makes you productive, but not accomplished. A simple strategy to change that.

Data Science jargon buster – for Data Scientists – Guerrilla Analytics
Do your data scientists confuse your customers? Here’s a useful translation table.


Links of the week

How Einstein Learned Physics – Scott Young
Scott digs into Einstein’s biography to reveal how the genius approached learning.

Domain Knowledge in Machine Learning Models for Sustainability with Stefano Ermon – TWIMLAI
An excellent podcast about machine learning for sustainability and how to incorporate domain knowledge into models.

Wrestling With A Future Of Two-speed Time – Unlimited
An overview of the times to come with respect to perception and use of time.

Discarded Hard Drives: Data Science as Debugging – Inverse Probability
Neil Lawrence shares his view on how computer scientists should frame and tackle data science projects.

The Best Data Scientists Get Out and Talk to People – Harvard Business Review
Most of the useful information is not digitized, so a good data scientist needs to talk their way to it.

Will Democracy Survive Big Data and Artificial Intelligence? – Scientific American
A comprehensive, multi-essay investigation into the future of society and the impacts of the digital and AI revolutions.

Romain Mader
Young and funny Swiss photographer Mader receives the Foam Paul Huf Award.

Links of the week

AI: Its nature and future

AI: Its nature and future is a little book by Margaret Boden, Research Professor of Cognitive Science at the University of Sussex. It is a quick (too quick?) overview of the history of artificial intelligence (AI), from the first symbolic reasoning systems to the more recent recursive deep neural networks. Boden discusses the philosophical and social implications of AI advances and also delves into the hotly debated singularity idea. She is a self-declared Singularity skeptic, but that doesn’t prevent her from acknowledging the threats that AI could pose to society in the near future. I wish she had gone deeper into her arguments, to better motivate her position and offer a clearer understanding of the topics covered.

Artificial Intelligence links of the week