Leadership Tales Part 1: Pran

Having worked in the high-technology world for 24 years, often in leadership roles, I think I know a little about leadership. So I felt I should write about some of my experiences with other leaders. True leadership is extremely important, not just for inspiring people to do their best, but also for people starting out in new careers, who need to see good examples if they are to become good leaders themselves.

I was very lucky to have Pran as my first ‘real’ manager. For a short while in my first job, I reported to his manager, but very quickly the team was reorganized and Pran became my manager.

Pran was an amazing character. He was the president of the Silicon Valley Indian Professionals Association (SIPA) which he was busy galvanizing into a very active organization that brought a lot of brilliant people together to do great things in the technology world.

He was a prolific writer, with three books and several magazine articles to his name. I co-wrote a couple of those articles, in EE Times, EE Design and Integrated Systems Design magazines. I also reviewed a few chapters of his first book, on EDA design methodologies, and got my name mentioned in the foreword as a result.

Over the course of my three years at Cirrus Logic, he always got work done from me in such a way that I never felt it was work. The office seemed a very fun place to go. I particularly remember the time when I came back from a photography workshop in the Mojave Desert and got my slides processed (this was back in 1996, when film was still a big thing). I brought the slides to the office, and when I had a bit of free time there, I used it to sort out which slides to keep and which to throw away. Pran joined me and even gave me advice on which ones he thought were worth keeping. Compare that to typical managers in the technology world, who would frown if they saw any of their employees doing anything other than office work in the office. Because of things like this, I felt very loyal to him and always gave him my best work.

In 1997, Pran left Cirrus to create his own company – it was called ByteK at that time. After a while I left Cirrus too and joined him at ByteK, the lure of working at a startup being very strong. I had a fantastic time there – I was involved in so many things, from setting up the company’s internal infrastructure to mentoring new college grads and consulting to raise money. Pran introduced me to many new concepts, such as ‘knowledge management’, which I confess I didn’t understand at the time. I was at ByteK for only a year, but I did so many things – a tool to automate log reporting with HTML output, learning Chrysalis formal verification, Viewlogic board design and Synopsys .lib well enough to teach courses in them, taking over Pran’s course in Logic Design at the UCSC Extension in Santa Clara, creating an online course framework in Perl and PostgreSQL with an Apache server, consulting at Synplicity to set up a test framework in Perl, and more.

But the way I left ByteK was not very good. Pran was travelling and away for a while. I had a strong disagreement with another of the three founders and at the same time, the folks at Synplicity made me an offer because they liked my work. One day, after a huge fight with this other founder, I just left ByteK. Pran was very hurt – when he came back, he told me I should have waited to at least just talk to him. I felt bad, I had let him down, but the deed was done.

After that we did not meet much for several years. Then in 2008, almost 10 years later, I accidentally ran into him. He had an office in a building where Suhas Patil was running a startup he had created with Cirrus people. I was there because Suhas had a photo studio in the same building, from which his wife Jayashree was running Nirvana, a fashion magazine startup, and I was doing some work for them. Pran was still running the same company, now a lifestyle business called Vitalect, built on the online learning framework. He was now the only founder left, running the business from the US while his engineers were all in Trivandrum, India. He had a small team, but everyone was happy and doing well.

We chatted for over an hour and realized we should have stayed in touch better. We resolved to do so, but it would be another 7 years before I met him again. I don’t remember why I reached out, but in late 2015, I did. We met for lunch at Old Ironsides Cafe in Santa Clara and talked about many things. He told me about his blog, “Pakora Corner”. He was still running Vitalect, and still publishing – same old Pran. He said he had had some bad health issues, but didn’t elaborate. I didn’t press either; I felt he didn’t want to talk about it. I was at Brocade then, and he asked me about some people there that he wanted to contact. Early in 2016 I called him to tell him that I could put him in touch with someone at Brocade.

He said he was busy with a bunch of things, so we should meet sometime mid-year. I contacted him again in July. He was in India – he had written a book about his friend and was in India launching it. So, we should meet later after he got back in a couple of months.

I clearly remember that weekend in September 2016. It was a Sunday afternoon, the 4th. I was at home, doing nothing much. I got a text from my friend Vishal, he said: do you know Pran? I replied yes of course, very well, why? He said Pran had died the previous day. He was in India and had a heart attack. I was shocked beyond belief. We were supposed to meet after he got back. Slowly but surely, I had wanted to build back the relationship we had had before I left ByteK. Now that would never happen. He was only two years older than I was.

I quickly called up a few of our common friends and told them. Everyone was just as shocked as I was. We started a thread on LinkedIn, and after a few days, a bunch of ex-Cirrus folks met at Aqui in Cupertino to remember Pran. It was quite nostalgic, everyone had something unique and good to say about him.

After Pran, I have had many, many managers, and I have worked at several companies, small and large. But I have never had anywhere close to as fulfilling a time as I did working with him. To a large extent, he is responsible for my professional success, and I owe him an immense amount of gratitude – I only wish I had had the chance to tell him that.


Why Machine or Deep Learning, and How to Choose

Two incidents happened recently that prompted me to write this article. First, I was writing a proposal for a potential partner of my startup, on how we could work with them to bring AI into their products and applications. They turned around and asked us why they should want to develop AI applications in the first place.

Second, I was chatting about my startup with an investor friend. After he understood that we had built a platform to accelerate the adoption of AI techniques by enterprises, he told me: “Shekhar, you are two steps removed from the problem. Enterprises today don’t even understand what AI is, how it differs from traditional techniques. They need to get this first.”

Of course I knew this, and had been talking about it, but I realized I needed to write it down. A truly comprehensive case has to be customized to the specific customer and problem; still, I do have some general points, and here they are.

Analytics can of course be done without AI, and has been for a long time. In many cases, though, the non-AI approach is severely limiting. The list below is not exhaustive; it covers some of the main reasons many enterprises are moving to AI models for data analysis.

  • The traditional data analytics approach is reactive. Data is collected from various sources and analyzed with tools that display it in various ways: dashboards, graphs, logs, and so on. The patterns these tools look for are based on existing knowledge. This is akin to monitoring, and responding to triggers requires human intervention any time something new is seen. With AI, the response is proactive: the model predicts the triggers and automatically applies the remediation, a further level of automation with a significantly reduced need for human intervention.
  • AI makes prediction decisions based on past and current data fed into the model. The model learns from this data, and the more realistic data it is given, the better it learns. This improves the predictions and decisions the model makes – so a prediction made in the past will generally differ from one made later, even on the same input, because in the time in between the model has learned more and become more accurate. With a non-AI predictive model, the same input always produces the same prediction, no matter when it is made, since the model is not learning. (A minimal code sketch of this contrast appears just after this list.)
  • Traditional models were designed to work with relatively little data. Today, organizations collect volumes of data that were simply not available before, and the newest deep learning algorithms keep getting better as that abundance grows. Data at this scale would often overwhelm traditional techniques.
  • With unsupervised learning, the newer AI techniques can detect patterns in the data that would not be obvious to humans. For example, it was recently determined that the plague, or Black Death, in 14th-century Europe was more likely spread by humans via lice, rather than by rats as was earlier thought. This was determined by simulating outbreaks in various cities with different models (rats, airborne, fleas/lice) and finding out which fit best. Had the data been fed to an unsupervised deep learning model, it could have surfaced the patterns that led to the same conclusion faster, without the need for simulations. (A toy clustering example also follows the list.)
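
The learning-over-time point and the unsupervised-learning point above are both easy to show concretely. First, a minimal sketch in Python with scikit-learn, on entirely synthetic data with invented features and thresholds, contrasting a hand-coded rule (which always answers the same way for the same input) with a model that is updated as new labelled data arrives:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

def static_rule(x):
    # Non-learning predictive model: a fixed, hand-coded threshold,
    # so the same input always yields the same prediction.
    return int(x[0] > 0.5)

# Incremental learner: updated with each new batch of labelled data.
model = SGDClassifier(random_state=0)
classes = np.array([0, 1])

probe = np.array([[0.45, 0.30]])  # the "same data input" we re-check over time

for week in range(5):
    # Each week brings fresh labelled observations (synthetic here).
    X = rng.random((200, 2))
    y = (X[:, 0] + 0.3 * X[:, 1] > 0.6).astype(int)
    model.partial_fit(X, y, classes=classes)
    print(f"week {week}: rule says {static_rule(probe[0])}, "
          f"model says {model.predict(probe)[0]}")

# The rule's answer never changes; the model's answer can shift as its
# decision boundary moves with the accumulating data.
```

Second, a toy illustration of unsupervised learning (again made-up data, and in no way a reconstruction of the actual plague analysis): k-means is handed unlabelled records and recovers groupings it was never told about.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Three hidden regimes with different characteristics; the model is never
# told which row came from which regime.
regime_centres = np.array([[0.2, 0.8], [0.7, 0.3], [0.9, 0.9]])
X = np.vstack([c + 0.05 * rng.standard_normal((100, 2)) for c in regime_centres])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=1).fit(X)
print("cluster sizes:", np.bincount(kmeans.labels_))
print("recovered centres:")
print(kmeans.cluster_centers_)
```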

Within the broad category of AI, there are multiple categories of learning. The latest, and the one most cited these days, is Deep Learning. The main difference between deep learning and traditional machine learning lies in how the features needed to solve the problem are identified. With traditional machine learning, feature selection and engineering are done manually. With deep learning, the system figures out which features matter for the prediction and automatically weighs them when making decisions.

For example, suppose our problem is weather prediction, and we have collected data on various weather phenomena from the past. To be more specific, let’s say we want to predict the likelihood of a natural fire in wooded areas. We might define the features required for this prediction as humidity, temperature and the density of trees in a given area, among others. For traditional machine learning, we would need to create algorithms that predict the possibility of a fire as a function of these features. A deep learning algorithm would instead figure out for itself which features are needed to make the prediction. Suppose we added a feature such as the population of rabbits in the given area. The traditional machine learning model would use this data in its prediction if it were programmed to do so, whereas the deep learning model would learn from its training that the rabbit data is not relevant to the prediction and would effectively ignore it. A small sketch of this contrast, on synthetic data, follows.
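
Here is a hedged sketch of that wildfire example in Python with scikit-learn. The data is synthetic, the feature names and relationships are invented, a small neural network stands in for a full deep learning model, and permutation importance is used to check how much each raw column actually contributes.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(42)
n = 2000

humidity = rng.uniform(0, 1, n)
temperature = rng.uniform(0, 1, n)
tree_density = rng.uniform(0, 1, n)
rabbits = rng.uniform(0, 1, n)          # deliberately has no effect on fire risk

risk = 2.0 * temperature - 2.5 * humidity + 1.5 * tree_density
fire = (risk + 0.3 * rng.standard_normal(n) > 0.5).astype(int)

# Traditional ML: the modeller decides which features matter and feeds only those.
X_manual = np.column_stack([humidity, temperature, tree_density])
clf_manual = LogisticRegression().fit(X_manual, fire)
print("manual-feature model accuracy:", round(clf_manual.score(X_manual, fire), 3))

# Neural network: gets every raw column and must learn their relevance itself.
X_all = np.column_stack([humidity, temperature, tree_density, rabbits])
clf_net = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000,
                        random_state=0).fit(X_all, fire)
print("all-feature network accuracy:", round(clf_net.score(X_all, fire), 3))

# Shuffling the rabbit column should barely hurt the network's accuracy,
# showing it has learned to ignore that feature.
imp = permutation_importance(clf_net, X_all, fire, n_repeats=10, random_state=0)
for name, score in zip(["humidity", "temperature", "tree_density", "rabbits"],
                       imp.importances_mean):
    print(f"{name:>12}: {score:.3f}")
```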

Deep learning also keeps improving as more training data is fed to it, whereas traditional models taper off after a while, as seen in the graph below. This makes deep learning a good technique to use where a lot of training data is available. (A rough way to check this on your own data is sketched just below.)
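
As an assumption-laden way to test that claim on a problem of your own, scikit-learn’s learning_curve compares models at increasing training-set sizes; the dataset, the two models and the size grid below are all placeholders.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import learning_curve

# Synthetic stand-in for a real labelled dataset.
X, y = make_classification(n_samples=6000, n_features=30, n_informative=15,
                           random_state=0)

models = [
    ("logistic regression", LogisticRegression(max_iter=1000)),
    ("neural network", MLPClassifier(hidden_layer_sizes=(64, 32),
                                     max_iter=500, random_state=0)),
]

for name, model in models:
    # Cross-validated test score at several training-set sizes.
    n_train, _, test_scores = learning_curve(
        model, X, y, train_sizes=np.linspace(0.1, 1.0, 5), cv=3)
    print(name)
    for n, score in zip(n_train, test_scores.mean(axis=1)):
        print(f"  {n:>5} samples -> accuracy {score:.3f}")
```

Whether the two curves actually separate depends entirely on the problem and the data; the point is that measuring them is cheap compared to guessing.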

Good examples of where to use deep learning are threat detection, where there are many samples of attacks (such as network intrusions), and predictive maintenance for IoT devices, where a lot of failure data is available (Hitachi uses this on their remote earthwork machines).

This also means that where very little training data is available, deep learning is not a good technique. An example is spearphishing attacks: the number of successful attacks is very small, so there is not much training data available for deep learning models, and the number of false positives becomes so high that this is not a good technique for predicting these specific types of attacks (see this paper on spearphishing).

Here is a cheat sheet for deciding what type of machine learning technique to use for a given problem. The original, which has links to more details on each individual technique, can be found here.