The Knowledge Leak: Why We Need AI ASAP




Then comes retirement day, and whether these professionals slug it out to 65 or 70, they eventually want to take their well-deserved break. And all of a sudden, all that experience is gone in an instant. As the balloons are popped and the ‘Best wishes’ cake is shared, we somehow allow a silent and devastating loss to take place. Call it “the knowledge leak”.

Every day, millions of people learn something new, get better at their jobs, and in turn make an impact, however small, on the economy and society. We understand this ‘experience’ to have value, and that value is often reflected in business transactions: experienced professionals command higher status and the salary that comes with it. This is not to say that exceptionally talented individuals with less experience are any less valuable; it is to say that those exceptionally talented individuals become even more valuable once you add the ingredient of enduring practice.

In some professions, at least in some countries, an apprenticeship is required to obtain the license to enter the workplace independently. Medicine has the residency program, law has a similar notion, and so on. In the past, apprenticeships often lasted longer, in some cases a lifetime, with the apprentice eventually inheriting the practice of their mentor. In this day and age, however, professionals know what they can command on the market, and the token money they receive during an apprenticeship is far less than they need to live the lifestyle they worked so hard to achieve, let alone pay back their student loans. Software engineers, for example, barely do any apprenticeship, with the exception of short internship stints that are less geared toward contributing to their maturity than toward piquing their interest in applying for a full-time job once they graduate. The industry can forgive some memory leaks and rigid designs as long as features are shipped and bugs are fixed in the next sprint.

Cognitive scientists and philosophers sometimes use “implicit knowledge” as a substitute for “experience”. Experience is a kind of learning: knowledge constructed through attention to detail and the ability to combine and recollect memories, correlations, and causations. However, there is something mercurial about this kind of knowledge. It cannot easily be captured in procedures or formulas the way knowledge in the natural sciences can. What’s more, it is not only the formula that matters; how one arrives at it may be of even greater value. Imagine for a second the ability to tap into Newton’s or Euler’s way of thinking. Would Fermat’s Last Theorem have taken 357 years to prove if we could think the way he did?

The business answer to the knowledge leak has been the so-called “knowledge transfer”. Once someone is set to move on, a colleague is assigned to go over their work and come up with questions about the parts that are unclear. Oftentimes the emphasis is on current status, open items, and the like; less often do the questions cover the ‘why’ of the work. With notice periods sometimes as short as two weeks, you get back what you put in. In some industries, such as software, retirement is not even the major source of knowledge leak; it is job hopping in pursuit of raises that keep up with real inflation.

Yet somehow this is all accepted as part of the price of doing business, as some kind of natural order of things. How can we wave the white flag so easily and let the wealth of our knowledge disappear into the void when it took us so long to create it?

We are at a time when socioeconomic pressures are mounting to provide more services to more people at ever lower prices (zero is about right). Between an aging population on one side of the planet and unemployed youth on the other, what we need most are the highly skilled individuals who can contribute to building new generations of scientists, engineers, and musicians. So we have to come up with solutions for plugging the knowledge leak and reproducing this knowledge to meet the growing demand.

Imagine this: when someone goes to see a doctor, the doctor usually asks some questions, perhaps pulls up some previous charts or notes, then examines the patient and decides whether any further laboratory work is needed. A prescription is given, the medication is dispensed, and a week or so later the person goes back for a checkup. Now, this may sound scientific enough, but in fact a lot of art goes into asking the right questions, listening while the patient describes their condition, generating hypotheses about what the case could be, and then deciding on a protocol to follow. In addition to what doctors learn in medical school, they have to continuously update their knowledge of current research and results from clinical trials, but also develop what can be described as ‘affective intelligence’: making a connection with their patients so that information flows through a conversation. The better the doctor becomes at combining these aspects, the more they are sought after by new patients; it sometimes takes weeks or months to book an appointment with those clever doctors. So imagine if we could actually record these sessions, the questions and the answers, equip doctors with digital stethoscopes and other devices to record the necessary metrics, ask about patients’ lifestyle and nutrition, and observe how they respond to the prescribed medication.

We can combine all of this information, along with the responses of the clever doctor, to build models that capture that doctor’s knowledge. What is more significant is this: we don’t have to wait for that clever doctor to retire to apply their knowledge. We can immediately expand it, reproduce it at zero marginal cost, and offer it to those who need it. Imagine that: knowledge finally freed from time and space. Imagine what that means for the throughput of the healthcare system, or for improvements in quality of life across the globe.

Another truly ailing system is education. Schools are being shut down, classrooms are overcrowded, and quality higher education is a privilege available only to a few. Teachers, somehow not considered highly skilled by most immigration systems, are underpaid and often find themselves forced to switch to a career in marketing or communications just to stay afloat. Online education services are definitely a step in the right direction, but there is plenty of room for improvement. Now imagine we could capture the skills of the best teachers in models. That way we could have personalized tutors to support both the exceptional and the struggling students. No longer would one curriculum be forced on an entire generation; instead, adaptive courses would enhance or patch students’ knowledge as needed. Some of the great schools throughout human history were built around these very concepts of apprenticeship and tutoring. They were highly selective as a matter of necessity. We don’t have to be; that, after all, is the best effect technology has on humanity: lowering the barrier to entry and empowering our ability to scale.


Why haven’t we built this “knowledge multiplier” already? Because this kind of knowledge combines many features that are hard to program with our current tools. There is a high level of uncertainty, and uncertainty is like a dark monster the software industry tries very hard to keep in a closet. It is hard to express, hard to code for, and hard to test. Perhaps above all, it is not something people want to pay for. Uncertainty is not the only challenge: implicit knowledge is difficult to formalize. It is an embodiment of what we humans learn over a lifetime, or maybe even over generations. There were attempts to build so-called expert systems, but with the exception of some narrow (though lucrative) domains, they never really took off; they were often too brittle for domains where new information must be continually incorporated into the decision process.

Today, most of our programs do one or more of three things: (1) calculate, performing some mathematical computation, whether to balance an account or to enhance an image or a video; (2) memorize, saving some bits and recalling them as needed, be that a picture of a dog or a report; and (3) communicate, transmitting and receiving bits between connected devices.

What they don’t do are two things: (1) interpret, as in render the bits they are moving, storing, or calculating into anything that has semantic value; and (2) reason, as in combine those semantic values from the interpretation step to build some kind of understanding of the context, to validate their interpretation, or to make decisions and recommendations on how to affect the context.

So what we need to do now as an industry is shift our focus from using machines as glorified digital archives and big, expensive calculators to integrating them into our physical world and giving them a way to understand and represent context. We need to do this to build the models that will capture the knowledge of our scarce professional population, and to expand the reach of these professionals to meet the demand brewing in the market now.


This is the trillion-dollar question. There are many kinds of models, but let’s assume we are talking about models that are taught, whether explicitly or through some kind of feedback, by a more knowledgeable entity (i.e., humans, for now). With that definition, a model M = (a + e)(d): the combination of an algorithm a and experience e, applied as a function over data d. In some sciences data is unnecessary, or at least easy to generate, but in others it is the most essential component. This certainly depends on the algorithm: data-hungry algorithms such as deep learning make the data needed for model generation more valuable, and we can use this reasoning to derive a reasonable approximation of the utility of both the data and the experience. The experience, in this case, is the feedback or guidance offered by the more knowledgeable entity so that the algorithm produces a sufficiently accurate model.
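To make the decomposition concrete, here is a minimal toy sketch of M = (a + e)(d). All names and the diagnosis rules are hypothetical, invented purely to mirror the formula: the “algorithm” is a naive rule, the “experience” is an expert correction layered on top, and the resulting “model” is their combination captured over the data.

```python
# A toy illustration of M = (a + e)(d): a model as the combination of
# an algorithm (a) and expert experience (e) applied over data (d).
# All names and rules here are hypothetical, chosen only to mirror the formula.

def algorithm(example):
    # The naive rule (a): guess "flu" whenever a fever is reported.
    return "flu" if example.get("fever") else "healthy"

def experience(example, guess):
    # Expert feedback (e): the clever doctor corrects the naive rule
    # when a rash accompanies the fever.
    if guess == "flu" and example.get("rash"):
        return "measles"
    return guess

def build_model(data):
    # The model M: algorithm plus expert corrections, captured over
    # the data as a mapping from cases to decisions.
    return {i: experience(x, algorithm(x)) for i, x in enumerate(data)}

data = [
    {"fever": True, "rash": False},
    {"fever": True, "rash": True},
    {"fever": False, "rash": False},
]

model = build_model(data)
print(model)  # {0: 'flu', 1: 'measles', 2: 'healthy'}
```

Once built, the mapping can be copied and queried anywhere at zero marginal cost, which is the point: the expert’s corrections now live in the model, not only in the expert.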

So the data has value, the experience has value, and the algorithm has value. Since the data has value, those who generate it should participate in the upside of generating the models; obviously, the experienced professionals should participate too. A lot of activity around AI today is about trying to crowdsource and capture data as cheaply as possible. The exchange seems unfair in my opinion, and I hope there will be a genuine effort to undo this digital heist. The experienced professionals seem prone to the same scam, albeit with a slight twist: they are offered some higher value, whether in direct monetary terms or by streamlining and increasing their operational efficiency to make their practices more profitable. This is still unfair in my opinion, because once a model is created and can be reproduced at zero marginal cost, the economic benefits are vastly disproportionate to whatever peanuts were offered initially. Calculating the utility of both data and experience is a complex subject, and I will expand on it in a future post.


This is going to sound like a fallacy now. I don’t mean that cloud computing is going to die; I think it will continue to exist for a very long time. But I also believe that, at least among the leadership of successful businesses, there is a growing understanding of the value of proprietary data. If AI is the new battlefield, data is the ammunition, and it is fetching a premium. So I believe there will be a stronger emphasis on hybrid architectures that keep some data in the cloud and some data on premises; in some industries this is already the norm and will be for a while. The other area of interest will be new cryptographic research, along the lines of homomorphic encryption, where models are still built by a third party but are interpretable only by the owners of the data.


Yet another way to bring AI forward faster is to empower the users, both the producers of the data and the experienced professionals. Instead of playing games around capturing data and experience, why not just create the necessary tools, and lower the barrier to entry for using them, so that people can create these models themselves? Instead of having five big companies with big, powerful AIs, we could have millions of AIs created by people who invested their lives in being good at what they do. They could directly reap the economic benefits by offering their so-called ‘agents’ on a marketplace. In fact, their agents would be property that could be inherited by their survivors. We would have plugged the knowledge leak and empowered individuals to directly impact the world. Now contrast this idea with Universal Basic Income.

amardeep kaushal

Blogger, Marketer & Data Analyst.
