Links of the week

Ski-mountaineers climbing the last steep meters to the summit of the Bishorn (4153m) in Valais, Switzerland

Time And Tide Wait For No Economist – UNLIMITED
How the market for time is changing, and how the leisure-time gap between skilled and unskilled labour is widening.

The Simple Economics of Machine Intelligence – Harvard Business Review
AI-based prediction will get cheaper and cheaper, but the value of complementary tasks that are still to be automated, such as judgement, will increase. A simple but effective economic perspective on the impact of AI.

Do you need a Data Engineer before you need a Data Scientist? – Michael Young
How Data Engineers and Data Architects can make your Data Science team more effective and satisfied.

The Art of the Finish: How to Go From Busy to Accomplished – Cal Newport
How task-based planning makes you productive, but not accomplished. A simple strategy to change that.

Data Science jargon buster – for Data Scientists – Guerrilla Analytics
Do your data scientists confuse your customers? Here’s a useful translation table.


Book review: So Good They Can’t Ignore You by Cal Newport


After reading Deep Work, following Study Hacks, and checking out the Top Performers course (yet to take it, though), I was curious to read Cal Newport’s book of career advice: So Good They Can’t Ignore You.

It punches as hard as its title, delivering immediately actionable advice on how best to steer, improve, and leverage your career to land your dream job.

How does it all play out, then? By following four simple rules (and their corollary laws).

Rule #1: Don’t follow your passion.
First of all, we very rarely know what our passions truly are; it is more the norm to become passionate about something we do really well. Secondly, passion is dangerous, since it can lead you to jump at options for which you lack the necessary skills. Thirdly, by trying to follow your passion, you end up assessing each job opportunity according to what it offers you, instead of what value you are producing.

Rule #2: Be so good they can’t ignore you (or, the importance of skill)
One needs to develop rare and valuable skills, so-called career capital, in order to trade them for better and better jobs. These skills are best acquired via the craftsman mindset, “a focus on what value you’re producing in the job”, and through deliberate practice, “an approach to work where you deliberately stretch your abilities beyond where you’re comfortable and then receive ruthless feedback on your performance” (more on this in Deep Work).

Rule #3: Turn down a promotion (or, the importance of control)
So now that you have built up your career capital, what do you trade it for? One of the most powerful traits to acquire is control over what you do, and how you do it: deciding how much to work, and from where. Control has its traps, though.
The first control trap states that “control that is acquired without career capital is not sustainable.”
The second control trap is that “the point at which you have acquired enough career capital to get meaningful control is exactly the point when you’ve become valuable enough to your current employer that they will try to prevent you from making the change.”
To avoid these traps, one should follow the law of financial viability, which briefly states that you should always check your desired changes against people’s willingness to pay for them.

Rule #4: Think Small, Act Big (or, the importance of mission)
Another fundamental source of satisfaction in your work is having a mission, but finding such a mission is not an easy task. Like control, mission also requires career capital: having a clearly defined mission but no skills to carry it out will only leave you unsatisfied and looking for another job to pay your bills. OK, you’ve got the necessary skills but still lack a driving mission. How do you find it? Cal argues that great missions are found in the adjacent possible of your field, meaning you first need to become an expert to spot new fruitful directions. Exactly like in science: great discoveries are found at the edges of current knowledge. Good, you’ve found a possible direction. Do you jump head-on into it? No, you take small bets in many of these directions, in order to probe what’s truly feasible, and also remarkable. A small bet is transformed into a compelling mission, and then into a great success, if it satisfies the law of remarkability, “which requires that an idea inspires people to remark about it, and is launched in a venue where such remarking is made easy.” Examples? Intriguing scientific discoveries in peer-reviewed journals and innovative software in open-source GitHub repositories.

That’s quite a concise summary of the book. To dig deeper into the arguments behind these rules and laws, and to read many people’s stories, successful and not, you ought to read the whole book. At 230 pages in large font it is a fast read, but you’ll come back to some chapters multiple times, to adjust your understanding to your current career situation.

Personally, I found the advice clear (which is not always the case), sound (which is even rarer), and immediately applicable. Overall, what’s best about the book is that it frames career development, and finding your dream job, in very practical and no-nonsense terms.

Buy it here.

Agent-Based Model Calibration using Machine Learning Surrogates

My friend Amir just sent me his latest paper on combining machine-learning surrogates, specifically extreme gradient boosting trees (XGBoost), with active sampling to explore the parameter space and calibrate agent-based models. This new approach allows for a much faster exploration of the parameters, identifying regions that calibrate well against real-world data. It also provides a measure of the relative importance of each parameter.


Abstract

Taking agent-based models (ABM) closer to the data is an open challenge. This paper explicitly tackles parameter space exploration and calibration of ABMs combining supervised machine-learning and intelligent sampling to build a surrogate meta-model. The proposed approach provides a fast and accurate approximation of model behaviour, dramatically reducing computation time. In that, our machine-learning surrogate facilitates large-scale explorations of the parameter-space, while providing a powerful filter to gain insights into the complex functioning of agent-based models. The algorithm introduced in this paper merges model simulation and output analysis into a surrogate meta-model, which substantially eases ABM calibration. We successfully apply our approach to the Brock and Hommes (1998) asset pricing model and to the “Island” endogenous growth model (Fagiolo and Dosi, 2003). Performance is evaluated against a relatively large out-of-sample set of parameter combinations, while employing different user-defined statistical tests for output analysis. The results demonstrate the capacity of machine learning surrogates to facilitate fast and precise exploration of agent-based models’ behaviour over their often rugged parameter spaces.
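To make the mechanics concrete, here is a minimal sketch of the surrogate idea, with plain random sampling standing in for the paper’s more refined active-sampling scheme; run_abm and matches_data are hypothetical placeholders for the expensive simulator and the user-defined calibration criterion, not the paper’s actual code.

```python
import numpy as np
from xgboost import XGBClassifier

rng = np.random.default_rng(0)

# Hypothetical toy stand-in for an expensive agent-based model:
# maps a 4-dimensional parameter vector to one output statistic.
def run_abm(params):
    a, b, c, d = params
    return a * np.sin(10 * b) + c ** 2 - d + rng.normal(0, 0.05)

# User-defined calibration criterion: does the simulated statistic
# fall within the band observed in real-world data? (Arbitrary here.)
def matches_data(output):
    return 0.4 < output < 0.6

n_dims, n_train, n_candidates = 4, 500, 100_000

# 1. Run the expensive simulator on a modest sample of parameter points.
X_train = rng.uniform(0, 1, size=(n_train, n_dims))
y_train = np.array([matches_data(run_abm(p)) for p in X_train], dtype=int)

# 2. Fit an XGBoost surrogate predicting, from parameters alone,
#    whether the model output would match the data.
surrogate = XGBClassifier(n_estimators=300, max_depth=4)
surrogate.fit(X_train, y_train)

# 3. Screen a huge candidate set cheaply; only the high-scoring points
#    would be sent back to the real simulator for verification.
X_cand = rng.uniform(0, 1, size=(n_candidates, n_dims))
scores = surrogate.predict_proba(X_cand)[:, 1]
print(f"{(scores > 0.9).sum()} promising points out of {n_candidates}")

# The fitted surrogate also ranks parameters by relative importance.
print(surrogate.feature_importances_)
```

The payoff is that step 3 costs milliseconds per point, whereas each real ABM run can take minutes or hours; active sampling improves on step 1 by concentrating new simulator runs where the surrogate is most uncertain.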


Links of the week

Sunday morning in Brick Lane – March 24th, 2012

Flow doesn’t lead to mastery – Scott Young
While many seek a state of flow as a way to deliver their best, research on deliberate practice shows that one has to go beyond flow, to a state of high and uncomfortable intensity, in order to achieve mastery.

What do Hiring Managers Look For in a Data Scientist’s CV? – Ben Dias
The title explains it all. A great post to read if you are applying for a data scientist position, or are hiring data scientists.

The Obvious Value of Communication is Perhaps Not So Obvious – Study Hacks
Hyperconnected offices (email, Slack, smartphones) may just be a very poorly designed distributed system. How can we improve on that?

Book Review: The Wisdom of Insecurity – Scott Young
Scott writes a wonderful, if a tad too long, review of The Wisdom of Insecurity by Alan Watts, expounding Watts’s view of Zen philosophy. I particularly enjoyed how Scott blends summary with his own opinions; something to learn from for improving my own reviews.

Review: A Treatise on Probability by John Maynard Keynes

A Treatise on Probability by Keynes is a very important book, one of a kind, in setting the philosophical and logical foundations of probabilistic reasoning.

Firstly, Keynes addresses key philosophical questions about the nature of probability, its interpretation, and its measurement. Keynes’s conception of probability is that it is a strictly logical relation between evidence and hypothesis: a degree of partial implication. Furthermore, probability is not necessarily numerical, and it is often impossible to compare degrees of probability. Numerical probability, which allows precise quantification and comparison, is a special case.
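For the notation-minded, here is a sketch, from my reading of the Treatise rather than a quotation, of how Keynes renders this relational view:

```latex
% Keynes writes a/h for the probability of conclusion a on evidence h:
% a logical relation of partial implication between premises and conclusion.
a/h = \alpha
% The limiting cases recover ordinary deductive logic:
a/h = 1 \quad \text{(h entails a: certainty)}, \qquad
a/h = 0 \quad \text{(h entails not-a: impossibility)}
% For many pairs (a, h) no numerical \alpha exists at all, and two such
% relations need not even be comparable; the numerical case is special.
```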

Secondly, he establishes a rigorous logical framework, along the lines of Russell and Whitehead’s approach to the foundations of mathematics in the Principia Mathematica.

Thirdly, inductive reasoning and the role of analogy are dissected from this newly established perspective.

Fourthly, some general semi-philosophical questions are addressed in light of the new probabilistic understanding.

Finally, the scene is set for Keynes to delve into statistical inference and elucidate the flaws in the methods proposed so far. While he dismantles the “just compute” approaches, he also presents a constructive alternative, based on comparing multiple series of events rather than just one.

I wish I had kept notes while reading the book and written this review in stages, so that I could comment more deeply on key passages. Nonetheless, I believe the Treatise will accompany me for many years to come, as it contains such lucid and insightful arguments about what we should mean when we talk about probabilistic reasoning.

Links of the week

Couple of Flamingos at the Oasi Sant’Alessio Natural Reserve, Pavia, Italy.

The Black Magic of Deep Learning – Tips and Tricks for the practitioner – EnVision
A host of tips for properly training deep neural networks.

Prophet: forecasting at scale – Facebook research
Facebook research releases Prophet, an open-source forecasting package for Python and R. Maybe an ambitious name, but worth trying out; see the short sketch at the end of this list.

Inside Facebook’s AI machine – Backchannel
A peek inside Facebook’s Applied Machine Learning division, where machine learning is democratized so that all engineers can use it in no time.

The Architects Of Time – Unlimited
Philosopher A.C. Grayling ponders the ultimate question: what is time? And offers this advice: it is never too late to stop wasting it.
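As promised above, a minimal sketch of the Prophet API, assuming a daily time series in a hypothetical CSV file with the two columns Prophet expects:

```python
import pandas as pd
from fbprophet import Prophet  # released on PyPI as "fbprophet"

# Prophet expects a dataframe with exactly two columns:
# "ds" (datestamps) and "y" (the numeric series to forecast).
# "example_daily_series.csv" is a hypothetical placeholder file.
df = pd.read_csv("example_daily_series.csv")

model = Prophet()  # defaults: piecewise trend plus weekly/yearly seasonality
model.fit(df)

# Extend the frame one year ahead and forecast, with uncertainty bands.
future = model.make_future_dataframe(periods=365)
forecast = model.predict(future)
print(forecast[["ds", "yhat", "yhat_lower", "yhat_upper"]].tail())
```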

Links of the week

How Einstein Learned Physics – Scott Young
Scott digs into Einstein’s biography to reveal how the genius approached learning.

Domain Knowledge in Machine Learning Models for Sustainability with Stefano Ermon – TWIMLAI
An excellent podcast about machine learning for sustainability and how to incorporate domain knowledge into models.

Wrestling With A Future Of Two-speed Time – Unlimited
An overview of the times to come with respect to the perception and use of time.

Discarded Hard Drives: Data Science as Debugging – Inverse Probability
Neil Lawrence shares his view on how to frame data science projects so that computer scientists can tackle them.

The Best Data Scientists Get Out and Talk to People – Harvard Business Review
Most of the useful information is not digitized, so a good data scientist needs to talk their way to it.

Will Democracy Survive Big Data and Artificial Intelligence? – Scientific American
A comprehensive, multi-essay investigation of the future of society and the impact of the digital and AI revolutions.

Romain Mader
Young and funny Swiss photographer Mader receives the Foam Paul Huf Award.