T-shaped vs V-shaped path in your Analytics career

We often start by learning multiple disciplines in an industry and then niche down to a specific skill, mastering it over time to build expertise and become an authority in that space.

Typically, many of us (me included) follow a T-shaped career path, where the horizontal bar of the ‘T’ represents a wide variety of generalized knowledge and skills, while the vertical bar represents depth in a specific skill. For instance, even as a Data Scientist, you still perform some Data Pre-Processing before Exploratory Data Analysis, Model Training/Experimentation, and Model Selection based on evaluation metrics. Although a Data Engineer or a Data Analyst primarily works on data extraction, processing, and visualization, a Data Scientist still needs to be familiar with these tasks to get the job done on time without depending on other team members.

For a Data Scientist, the vertical bar of the ‘T’ is crafting the best models for the dataset, while the horizontal bar covers data processing (cleaning, transformation, etc.) and visualizing KPIs as insights that help the business make informed decisions.

Strategy and leadership consultant and author Jeroen Kraaijenbrink proposes a V-shaped path, which makes sense in our current economic situation, with layoff news buzzing across many multinational companies.

In terms of similarities, the author notes that both models involve understanding one focus area deeply while keeping some knowledge across other areas. The V-shaped model pairs one area of deep knowledge with several adjacent areas that are neither deep nor shallow, but somewhere in between. As Jeroen describes it: “It is medium-deep, medium-broad, enabling us to be versatile and agile.”

For illustration, a Data Scientist who aspires to go above and beyond expectations can collaborate with Data Engineers, perform AI/ML modeling, build reports and dashboards, generate meaningful insights, and enable end-user adoption of those insights. It takes a combination of hard and soft skills: storytelling, collaboration with peers, project management, and so on. Over time, as one repeats this process, they develop deeper knowledge of model development and management and build the adjacent soft skills to excel at work.

In my view, we start on a T-shaped path that eventually morphs into a V-shaped one as we work hard on one skill and develop its adjacent skills. And this applies to any field you’re in.

How long would this transformation into a V-shaped path take? Will it take about 10,000 hours (roughly a decade), as Gladwell’s book “Outliers” suggests for becoming an expert? Maybe, yes! The sooner, the better!

I’ll leave you with a three-phase approach to becoming an expert according to the author Jeroen.

Image Credits: https://www.linkedin.com/in/jeroenkraaijenbrink/

How do people spend their time?

In this fast-paced world, do we ever pause to reflect on where a significant part of our time actually goes, and who we spend most of our time with over the course of our lives? I think it’s important to ponder these questions and consider corrective actions depending on our life’s priorities.

I am sharing these interesting insights which I got from a couple of sources.

Our World in Data has published how people spend their average day, comparing data across a few selected countries. The dimensions compared are work, sleep, eating, and other leisure activities.

  • China puts in about 2x the work hours of countries such as Italy, and the data also shows a link between work and sleep: people in China dedicate more time to sleep than any other country listed.
  • Countries like Italy, Finland, Norway, Denmark, Germany, and Belgium indulge in more leisure activities than the others.
  • People in the USA and India sleep more than the average of 8 hours and 48 minutes. It was surprising to see India emerge at the top on this data point! South Korea sleeps the least on the list.
  • France, Spain, Italy, and Greece appear to spend the most time eating and drinking, whereas the USA spends the least.

A general pattern is that people in rich countries can afford to work less and spend quality time on leisure activities. There is a strong correlation with the happiness index as well: people who spend quality leisure time are happier than those who spend less time on leisure. For instance, Finland has been honored as the happiest country in the world, and its people spend more time on leisure activities.

While these insights are at the country level, I want to refer to another source: a Twitter thread by Sahil Bloom. He summarized some key insights on who we spend our time with over the course of our lives. The source data comes from the American Time Use Survey, published in Our World in Data.

  • Time spent with Family

As we grow from toddlers to adults, we move places for work and settle across different cities and countries. The graph clearly shows that we spend less and less time with our parents and siblings. I can’t disagree with what Sahil beautifully puts: “Prioritize and cherish every moment.” If you get a chance to spend your whole life near your parents, consider yourself lucky, as many people are not so privileged for various reasons.

  • Time spent with friends

True friends have become rare these days. Again, consider yourself lucky if you have found one and are still keeping in touch. Friends often change over the years, and hence the graph peaks during the teenage years and then gradually declines. Stay in touch with the true ones, especially those who travel with you through the good and bad phases of your life.

  • Time spent with Partner

For the majority of people, time spent with a partner will be more than time spent with parents, siblings, or friends. People tend to move places for better work, and thanks to globalization, they move along with their partners and kids, far from their respective parents.

  • Time spent with children

It is always a joy to re-learn with your kids and view the world again through their lens. The graph shows the peak between the ages of 30 and 40, declining thereafter.

  • Time spent with coworkers

This is where you’re going to spend the most time outside of your family. Finding the right workplace, the right mentor, and the right peers is key to success in your professional career.

  • Time spent alone

No matter how you view the entire timeline of your life, you might end up spending a lot of time alone, during commutes, travel, and whatnot. Having a conscious daily routine is key to bettering yourself each day.

There’s a famous line: if you get one percent better each day for one year, you’ll end up thirty-seven times better by the time you’re done. Spend your alone time seeing how you can improve your personal and professional life. Celebrate small wins, spend quality time with your close ones, and live with contentment.

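The thirty-seven-times figure is just compound growth; a quick sanity check in Python:

```python
# Improving by 1% a day compounds multiplicatively over 365 days.
daily_gain = 1.01
days = 365
improvement = daily_gain ** days
print(improvement)  # ~37.8, i.e. roughly thirty-seven times better
```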
Analytics Industry Study – India – May 2021

You may be an experienced employee in the analytics space, an aspiring Data Scientist/Engineer, or an executive looking to channel investments into business use cases. Technologies like data/business analytics, AI/ML/DL, and data engineering have been thriving in the market, creating better career opportunities and helping bring better customer experiences to products and services.

According to research firm Allied Market Research, the global Big Data and Business Analytics market was valued at $193.14 billion in 2019 and is projected to reach $420.98 billion by 2027, growing at a CAGR of 10.9% from 2020 to 2027. The growth is promising, given that many client organizations are pivoting to digital and undergoing massive digital transformation exercises. This should create more business opportunities to be uncovered from huge volumes of data using analytics.
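For readers who want to reproduce such projections, CAGR follows from the start value, end value, and number of years. A small sketch (note that the exact rate depends on which base year the research firm used, so the result may differ slightly from the quoted 10.9%):

```python
def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate between two values, as a fraction."""
    return (end_value / start_value) ** (1 / years) - 1

# Market-size figures quoted above, in USD billions, over 2019-2027.
rate = cagr(193.14, 420.98, 8)
print(f"{rate:.1%}")  # in the ballpark of the quoted ~10.9% CAGR
```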

In India, according to a recent 2021 study by Analytics India Magazine, the analytics industry’s market size is about $45.4 billion, registering a growth of 26.5% YoY (last year, it was $35.9 billion).

There are a few insights I learnt from their study that I would like to share with you today –

  • Indian analytics industry to grow to a market size of $98 billion by 2025 and $118.7 billion by 2026
  • Analytics accounts for 23.4% in the Indian IT/ITES market size in 2021. This is projected to grow to 41.5% by 2026
  • BFSI sector (13.9%) saw the maximum analytics offering contribution compared to other sectors followed by Manufacturing, Retail & E-Commerce, Pharma & Healthcare, FMCG, Telecom, Media & Entertainment, Energy
  • Bengaluru (30.3%) is the top-most city in terms of analytics contribution followed by Delhi (26.2%), Mumbai (23.4%)
  • Analytics services – more than half (51.6%) of the market share comes from the U.S., followed by the U.K. (13.2%), Australia (8.3%), and Canada (6.4%)
  • Among the analytics servicing companies, IT firms dominate the contribution at 43% with leading firms such as TCS, Accenture, Infosys, Cognizant, Wipro, IBM, Capgemini.

With respect to salary compensation, there are a few interesting points to note as well –

  • 41.5% of all analytics professionals fall in the higher income bracket, greater than 10 lakhs
  • The salary of an analytics professional is 44% higher than that of a software engineer. This could be an attractive proposition for fresh or entry-level graduates considering analytics as a career option.
  • Data Engineers (14.9L per annum) and Big Data Specialists (14.8L per annum) surpassed the median salary of AI/ML Engineers (14.6L per annum) by a narrow margin.
  • The Python skill set saw the highest salary, followed by SAS/R, QlikView/Tableau, and PySpark/Hadoop

Here’s a break-down of the salary across different experience levels (Source: AIM). Due to several factors, such as pandemic salary cuts, salaries in 2021 are slightly lower than in 2020.

Here’s a look across different industries and how they pay on an average –

Captive Centers, Consulting Firms pay higher than Domestic Firms (like Reliance), Boutique Analytics Firms, and IT Services

Hope this compilation of the analytics industry outlook gives you some insights to focus on and work towards your goals!

Credits: Analytics India Magazine (AIMResearch), Allied Market Research

5 Presentation Hacks To Captivate Your Audiences

Presentation skills – well, this skill can’t be emphasized enough in a client serving role.

In a typical business analytics engagement, our clients wish to take away some important findings and recommendations.

For the project development team, it might be trendy to use fancy statistical and machine learning models with state-of-the-art technologies (such as cloud), programming, and methodologies. At the end of the day, business users only want the insights, shown in a precise format for easy consumption and action.

Not all business folks comprehend the under-the-hood tools and programming. It is the project team’s responsibility to present the end results as insights to the key stakeholders: more precisely, to show how the solution helps the team in so many ways.

Brainstorming the storyline, preparing a skeleton out of the data & insights, adding relevant content, and reviewing internally multiple times before the actual presentation day is the usual practice.

On the presentation day, it is critical to make the best use of the stakeholders’ time and explain the key takeaways in your favorite presentation tool, such as MS PowerPoint or Google Slides.

I came across these 5 interesting presentation hacks from Ivan Wanis Ruiz

1. Lazy Rule

Instead of writing down all the text in a slide, make a slide that gives the gist, so the audience can quickly skim it and then listen to you.

If your slide explains pretty much everything, like a hand-out note, then there is no need for a presentation session! So, create a slide with only minimal words and complement it with your narrative.

2. Adding Visual Effects (Picture Superiority Effect)

A picture speaks a thousand words! Audiences do not have time to read and remember everything, even if they attempt to skim through it all.

To make it easier, present your insights in the best visual format, so they can be quickly interpreted and registered in memory. Our brains process a visually appealing picture better than a slide full of text in different fonts and colors.

Understandably, it is also difficult for the audience to listen while staring at a slide; most often, they do not know which part of the slide you are narrating. This takes us to the next principle: the magnification rule.

3. Magnification Rule

Explicitly ask your audience to look at a specific section of the slide (you may also color-code accordingly) and begin your narration in a structured manner. This might sound simple, but it keeps the audience engaged with you.

4. Capitalize on ‘B’ and ‘W’

To make your presentation more engaging and personalized, you may ask a question and explain your narrative with an experiment. In presentation mode, pressing ‘B’ blacks out the screen and ‘W’ whites it out; use that blank screen to start an illustrative example based on your context.

It can also be used when you want to write with a pen for a short illustration; press the same key again to resume the actual presentation.

5. Repeating Agenda Strategy

Most often, your deck might run to quite a few slides. It’s always a good practice to set an agenda, compartmentalize your slides accordingly, and have a clear script on what needs to be explained on each slide.

Repeating the agenda after each section lets the audience keep track of which sections have been completed and which remain during the course of the meeting. It also allows you to shift between sections in the interest of time and preferences.

For more information, please visit the author’s Udemy page.

Happy Presentations!

Introduction to Sentiment Analysis using Stanford NLP

Nowadays, consumer forums, surveys, and social media are generating huge volumes of text content like never before compared to the last decade.

Interesting use cases include brand monitoring using social media data, voice-of-customer analysis, etc.

Thanks to research in Natural Language Processing (NLP), many algorithms and libraries have been written in programming languages such as Python, helping companies discover new insights about their products and services.

Popular NLP Libraries in Python

NLTK (Natural Language Toolkit) is an open-source Python package that ships with a huge corpus of human language data. It performs tasks such as tokenization, parsing, classification, tagging, and semantic reasoning.

There are other prominent libraries as well. For instance, TextBlob is built on top of NLTK. And there are others such as spaCy, Gensim, and Stanford CoreNLP.

Common NLP Tasks

In a nutshell, there are many text-related tasks we can think of: tokenization, part-of-speech (POS) tagging, named entity recognition, coreference resolution, sentiment analysis, stemming, lemmatization, stopword removal, singularizing/pluralizing, n-grams, spell check, text summarization, topic modeling, and handling the specific natural language we’re dealing with.

In this article, I’d like to share a simple, quick way to perform sentiment analysis using Stanford NLP.

The outcome for a sentence can be positive, negative, or neutral. In a general sense, this is derived from two measures: (a) polarity and (b) subjectivity.

Polarity ranges between -1 and 1, indicating sentiment from negative through neutral to positive, whereas subjectivity ranges between 0 and 1: closer to 0 means objective (factual information) and closer to 1 means subjective.
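To make polarity concrete, here is a toy lexicon-based scorer. This is only an illustration with a made-up word list; it is not how Stanford NLP (which uses a tree-structured neural model) or TextBlob actually computes sentiment:

```python
# Hypothetical mini-lexicon mapping words to polarity scores in [-1, 1].
LEXICON = {"great": 0.8, "good": 0.6, "better": 0.4, "bad": -0.6, "terrible": -0.9}

def polarity(text: str) -> float:
    """Average the scores of known words; unknown words are ignored."""
    scores = [LEXICON[w] for w in text.lower().split() if w in LEXICON]
    return sum(scores) / len(scores) if scores else 0.0  # 0.0 = neutral

print(polarity("The intent was great , but it could have been better"))  # ~0.6, mildly positive
```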

Stanford NLP is built on Java but has Python wrappers, and it is a collection of pre-trained models. Let’s dive into a few instructions…

  1. As a pre-requisite, download and install Java to run the Stanford CoreNLP Server.
  2. Download Stanford CoreNLP English module at https://stanfordnlp.github.io/CoreNLP/download.html#getting-a-copy
  3. Navigate to the downloaded folder and unzip the files. Then open your command prompt and run the following command to start the server. Note: the -mx4g option allocates 4 gigabytes of memory.
java -mx4g -cp "*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer -port 9000 -timeout 50000

Launch your Python Jupyter notebook or IDE (e.g., Spyder) and run the code below. Ensure you install the pycorenlp package using pip.

from pycorenlp import StanfordCoreNLP

nlp = StanfordCoreNLP('http://localhost:9000')

text = "The intent behind the movie was great, but it could have been better"
results = nlp.annotate(text, properties={
        'annotators': 'sentiment, ner, pos',
        'outputFormat': 'json',
        'timeout': 50000,
})

for s in results["sentences"]:
    print("{} : {}".format(" ".join(t["word"] for t in s["tokens"]),s["sentiment"]))

The annotate call lets us run specific NLP tasks, such as sentiment analysis, and it returns output in JSON format.

Once you’re done, you can terminate the Java server by pressing Ctrl + C in the command prompt.

Stanford NLP supports multiple languages other than English. You can follow the documentation provided at https://stanfordnlp.github.io/CoreNLP/

For a quick sample analysis, try http://corenlp.run/, a hosted Stanford NLP instance. Type a sentence and explore the visual representation of some of the analyses.

You can refer to the same sample code on my GitHub: https://github.com/coffeewithshiva/Sentiment_Analysis_Stanford_NLP

On TextBlob, I came across the GitHub notebook below, which might be extensively useful: https://github.com/shubhamjn1/TextBlob/blob/master/Textblob.ipynb

Happy NLP!

Top Use Cases of AI in Business

It feels as if the movie Terminator was released just recently, and many of us have wondered whether machines could help us with daily chores and support business operations.

Fast forward! We’re already seeing changes around us where AI-enabled systems help in many ways, and the potential looks bright a few years down the road.

I was researching a few top use cases of AI in business a couple of weeks ago. I thought I’d share them here, and I’m sure you’ll be excited to read and share them too.


1. Computer Vision – Smart Cars (Autonomous Cars): An IBM survey found that 74% of respondents expect to see smart cars on the road by 2025. Such a car might adjust internal settings (temperature, audio, seat position, etc.) automatically based on the driver, report and even fix problems itself, drive itself, and offer real-time advice about traffic and road conditions.

2. Robotics: In 2010, Japan’s SoftBank telecom operation partnered with French robotics manufacturer Aldebaran to develop Pepper, a humanoid robot that can interact with customers and “perceive human emotions.” Pepper is already popular in Japan, where it’s used as a customer service greeter and representative in 140 SoftBank mobile stores.

3. Amazon Drones: In July 2016, Amazon announced its partnership with the UK government in making small parcel delivery via drones a reality. The company is working with aviation agencies around the world to figure out how to implement its technology within the regulations set forth by said agencies. Amazon’s “Prime Air” is described as a future delivery system for safely transporting and delivering up to 5-pound packages in less than 30 minutes.

4. Augmented Reality (e.g., Google Glass): It can show the location of items you are shopping for, with information such as cost, nutrition, or whether another store has it for less money. Being AI-driven, it will understand that you’re likely to ask for the weather at a certain time, or want reminders about meetings, so it will simply “pop up” unobtrusively.

5. Marketing Personalization: Companies can personalize which emails a customer receives, which direct mailings or coupons, which offers they see, which products show up as “recommended,” and so on, all designed to lead the consumer more reliably towards a sale. You’re probably familiar with this use if you use services like Amazon or Netflix. Machine learning algorithms analyze your activity and compare it to that of millions of other users to determine what you might like to buy or binge-watch next.

6. Chatbots: Customers want the convenience of not having to wait for a human agent to handle their call, or wait hours for a reply to an email or Twitter query. Chatbots are instant, available 24×7, and backed by robust AI offering contextually relevant, personalized conversation.

7. Fraud Detection: Machine learning is getting better and better at spotting potential cases of fraud across many different fields. PayPal, for example, is using machine learning to fight money laundering.

8. Personal Security: Airports – AI can spot things human screeners might miss in security screenings at airports, stadiums, concerts, and other venues. That can speed up the process significantly and ensure safer events.

9. Healthcare: Machine learning algorithms can process more information and spot more patterns than their human counterparts. One study used computer-assisted diagnosis (CAD) to review the early mammography scans of women who later developed breast cancer, and the computer spotted 52% of the cancers as much as a year before the women were officially diagnosed.
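The “compare your activity to millions of other users” idea in the marketing personalization use case above is, at its simplest, user-based collaborative filtering. A minimal sketch with made-up ratings (not any particular vendor’s implementation):

```python
import math

# Made-up user -> item ratings for illustration.
ratings = {
    "alice": {"matrix": 5, "inception": 4, "titanic": 1},
    "bob":   {"matrix": 5, "inception": 5, "titanic": 2},
    "carol": {"matrix": 1, "inception": 2, "titanic": 5},
}

def cosine(u: dict, v: dict) -> float:
    """Cosine similarity between two sparse rating vectors."""
    common = set(u) & set(v)
    num = sum(u[i] * v[i] for i in common)
    den = math.sqrt(sum(x * x for x in u.values())) * math.sqrt(sum(x * x for x in v.values()))
    return num / den if den else 0.0

# Alice's taste is closer to Bob's than to Carol's, so items Bob liked
# that Alice hasn't seen would be recommended to her first.
print(cosine(ratings["alice"], ratings["bob"]) > cosine(ratings["alice"], ratings["carol"]))  # True
```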

Data Extraction Limitations of Radian6, Sysomos That You Need To Know!

Application of Social Media Listening Tools

Social media data is vast – we all agree. As per this blog, in a single minute in 2018, 12.9M texts were sent, 473k tweets posted, and 49k photos shared on Instagram, to call out a few sources!

Text analytics projects, notably social media analytics, involve extracting huge volumes of data relevant to a particular industry or context. Some of the business objectives could be

(a) to identify the emerging trends from these conversations,

(b) to understand the sentiment of a specific brand/event etc.

A quick way to extract data from, say, Twitter or Instagram is to register for each source’s API and pull the data we’re looking for. For selected blogs and forums, we may have to write web scraping scripts using Python.

What if there were an aggregator that pulls massive data across sources, including historical data from past years? In the market, there are popular social media listening tools such as Radian6 and Sysomos that cater to this requirement. These tools index the data at a defined frequency and allow us to extract it.

How to extract data from Radian6 or Sysomos is a topic for another day. In this article, I would like to list the data limitations and constraints I have come across so far. Knowing these key constraints may help you plan the extraction phase of your project accordingly.

By the way, Radian6 was acquired by Salesforce, and the product was renamed and released as “Social Studio”.

Data Extraction Limitations of Salesforce Social Studio (formerly, Radian6)

1) In a single day, we can extract either 500k records or 3 months of data in a single go, whichever is lower. If you want to extract 1 year of data on the topic “Indian Premier League”, for instance, you can add the keywords and extract quarter by quarter; at the end, you would have four files, one per quarter.

2) For Twitter, we can download only 50k records in a single day. Past that limit, the tool can only extract the external IDs, which we then need to pass to Tweepy to fetch the corresponding tweets.

There’s a good possibility that, say, you have 10k tweets for the period Jan-Dec 2015 on selected keywords extracted via Social Studio today, and when you run those external IDs through Tweepy, don’t be surprised if the data volume has reduced significantly. Time and again, Twitter removes spam tweets and blocks the offending users! Hence, you might see a mismatch in the numbers, which is fine; we don’t really want the spam messages, after all.

Since Social Studio had historically indexed those spam tweets and blocked users, we can’t do anything about it. I wish the tool had a built-in feature to check back with Twitter, at least twice a year, on whether those users had been blocked, so a major portion of the junk data could be removed 🙂
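The quarter-by-quarter workaround in point (1) above is easy to script; a sketch using only Python’s standard library:

```python
from datetime import date

def quarter_ranges(year: int):
    """Split a calendar year into four (start, end) date ranges."""
    starts = [date(year, m, 1) for m in (1, 4, 7, 10)]
    ends = [date(year, 3, 31), date(year, 6, 30),
            date(year, 9, 30), date(year, 12, 31)]
    return list(zip(starts, ends))

# One export request per quarter keeps each pull under the 3-month cap.
for start, end in quarter_ranges(2015):
    print(start, "->", end)
```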

Data Extraction Limitations of Sysomos

1) For Twitter, forums, blogs, news, and Tumblr, the historical data can only go back up to 13 months.

2) For YouTube, the download limit is 500 mentions per export, while for all other sources it is 50,000 mentions per export. So we need to shorten our date range and download in batches if the data exceeds the 50k limit.

3) For Facebook and YouTube, the date limitations are whatever the respective APIs allow us to go back to, so we cannot give an exact date.


Key Differences & Common Challenges

1) Social Studio can extract rolling 3-year data, whereas Sysomos gives us only the last 13 months.

2) Sysomos has sources such as Instagram, and none of these listening tools covers Pinterest yet.

3) We can’t add a new data source, be it a blog or a forum, and hence we end up web scraping for custom requirements or websites.

4) The more generic your input keyword is, the more spam/irrelevant data your output will contain! That’s the key challenge here. A case in point: for one of the products we’re extracting, “Kitchen Sink”, lots of idioms and phrases get pulled out, e.g., “Let it sink in for a minute”. There’s an album called Kitchen Sink as well :). All this spam has to be cleaned prior to subsequent analyses.

Based on your requirements, you can choose the tool and extract the desired data.

P.S.: These limitations keep changing and being updated by the respective tools. I’ve written these based on the past six months of usage.

Dimensions vs Measures vs Metrics vs KPIs

Quite often, I hear these terms being used interchangeably.

Are there any differences between dimensions, measures, metrics and KPI (Key Performance Indicator)? Yes, there are!

Let’s take a simple example to know what these terms actually mean.

If you’re the sales leader of a company, you’d be interested in the performance of a particular product line in a certain year. Let’s say a particular version of the Mi Mobile registered sales of 250,000 units in an online flash sale. In this case, the dimension is the product type, Mi Mobile, whereas 250,000 units is the measure (aka value).

How about Metrics (aka Business Metrics) and KPI?

A business sets targets/objectives every year for its product lines, the idea being to create and drive strategies to realize those objectives throughout the year. A metric is a way to assess the performance of a particular division or the company as a whole. Revenue is one such business metric, assessed by comparing against the previous year, industry benchmarks, and competitors.

A company can devise and track multiple metrics during the year. However, there have to be certain “key” metrics that the business keeps tabs on frequently. Those key performance metrics determine the health of the organization. If performance strays from the objectives, the business strives to course-correct its strategies.

A KPI, or simply a metric, is often a combination of two or more measures.

A simple KPI could be sales of Mi Mobile in 2017 against the previous year. Assume the target set by the business for 2017 is 500,000 units in a geographical location. The business can validate this and see where it can invest further to grow its sales numbers. Popular brands like Mi, which sell primarily on eCommerce websites, have now ventured into offline stores for further growth.
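With the illustrative numbers above, such a KPI reduces to simple ratios (the 500,000-unit target is the assumption stated in the text):

```python
def yoy_growth(current: float, previous: float) -> float:
    """Year-over-year growth as a fraction of the previous value."""
    return (current - previous) / previous

def target_attainment(actual: float, target: float) -> float:
    """Share of the yearly target actually achieved."""
    return actual / target

# Units sold in the flash sale vs. the assumed 500,000-unit yearly target.
print(target_attainment(250_000, 500_000))  # 0.5 -> 50% of target achieved
```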

For the services industry, customer retention rate would be key. After all, retaining a customer costs relatively less than acquiring a new one. Companies focus on retaining the most profitable customers, as they bring the maximum value to the top line of the business.
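Customer retention rate itself is a simple derived measure. One common formulation (definitions vary by company, so treat this as an assumption) is (E - N) / S:

```python
def retention_rate(customers_at_end: int, new_customers: int, customers_at_start: int) -> float:
    """Fraction of starting customers still around at period end.

    (E - N) / S, where E = customers at end, N = new customers acquired
    during the period, S = customers at start.
    """
    return (customers_at_end - new_customers) / customers_at_start

print(retention_rate(1050, 150, 1000))  # 0.9 -> 90% of customers retained
```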

Your KPI should be well defined and relevant to the business, and notably, the corresponding business stakeholders should be aligned on it. A good KPI adds value in measuring business performance because it is quantifiable; a bad KPI might distract you from your focus and from achieving your target.

A scorecard or a dashboard can be used to track KPIs on a daily/weekly/monthly/quarterly/yearly basis. Tools such as Tableau Public and MS Power BI let you build visualizations and share them among the stakeholders.

What’s trending: Big Data vs Machine Learning vs Deep Learning?

If you’re new to analytics, you might encounter too many topics to explore in this field: reports, dashboards, business intelligence, data visualization, data analytics, big data, AI, machine learning, deep learning. The list is incredibly overwhelming for a newbie beginning his/her journey.

I really wanted to rank these five buzzwords and check which is currently trending relative to the others: “Business Intelligence”, “Data Analytics”, “Big Data”, “Machine Learning”, “Deep Learning”.

I made use of my favorite tool, Google Trends, assessing worldwide data for the last 5 years with Google search queries as the prime source.

Analytics Trends 1

I inferred the following from the above user-searched data:

  1. Big Data has stayed at the top of users’ minds for quite a long time, since 2012. However, Machine Learning has been soaring since 2015 and could potentially overtake Big Data within a year as the “hottest” skill set for any aspiring analytics professional.
  2. Deep Learning is an emerging space! It should gain more momentum a year from now. It’s essential to learn Machine Learning concepts before diving into Deep Learning.
  3. Needless to say, the Data Analytics field is also growing moderately. For beginners, this could be the best area to begin your journey.
  4. The BI space is starting to lose focus among users, thanks to self-service BI portals (and the automation of building reports/dashboards) and Advanced Analytics.


I happened to see a few additional interesting insights when I drilled down industry-wise.

  1. Data analytics is still the hot topic for Internet & Telecom
  2. Big data for Health, Government, Finance, Sports, Travel to name a few
  3. BI for Business & Industrial
  4. Machine Learning for Science


Interest by region shows that China is keen on Machine Learning and Japan on Deep Learning. Overall, Big Data is still spread all over the world as the hot topic for the time being. Based on the above graphs, it’s quite evident that Machine Learning will turn out to be the top skill set for any analytics professional to have in his/her kitty.

You can go through this Forbes article to understand the differences between Machine Learning and Deep Learning at a high level.

Please let me know what you think will be the hottest topic of interest in the analytics spectrum.