Figure Unveiled a Humanoid Robot in Partnership with OpenAI

Yet another milestone in the history of AI and Robotics!

Yes, I’m not exaggerating! What you are about to read hints at a futuristic world where humanoid robots could very well serve humanity in many ways (keeping the negatives out of the picture for the time being).

When I first heard this news, movies such as I, Robot and Enthiran, the Robot flashed through my mind! Putting my filmy fantasies aside, the robotics company Figure, in partnership with Microsoft and OpenAI, has released the first general-purpose humanoid robot – Figure 01 – designed for commercial use.

Here’s the quick video released by the creators –

Figure’s robotics expertise is now augmented by OpenAI’s multimodal models, which can understand and generate responses to inputs such as images, audio, and video. The future looks all the more promising: these humanoids could be supplied to manufacturing and commercial settings where there is a shortage of workers to scale production.

In the video, the robot demonstrates the ability to recognize objects such as an apple and take appropriate actions. Figure 01 reportedly stands 5 feet 6 inches tall, weighs 132 pounds, can carry up to 44 pounds, and moves at a speed of 1.2 meters per second.

Figure is backed by tech giants such as Microsoft, the OpenAI Startup Fund, NVIDIA, Jeff Bezos (Bezos Expeditions), and more.

Lots of fascinating innovations are happening around us thanks to Gen AI / LLMs – Copilot, Devin, Sora – and now a glimpse into the reality of humanoid robotics. Isn’t it a great time to be in this field?!

Meet Devin, the first AI-based Software Engineer

Gen AI enables writing highly sophisticated code for a given problem statement. Developers can already take advantage of that!

What if there were a full-fledged tool that could write code, fix bugs, leverage online resources, collaborate with humans, and solve gigs on popular freelancing sites such as Upwork?!

Is this fiction? Well, not anymore.

Meet Devin, the first-of-its-kind AI-based software engineer, created by Cognition Labs, an applied AI lab that builds apps focused on reasoning.

The tech world is already amazed by the capabilities of Copilot, which assists in developing code snippets. Devin, however, is a step up in terms of features: it can cater to end-to-end software development.

According to the creators, Devin has the following key capabilities as of writing –

  1. Learn how to use unfamiliar technologies.
  2. Build and deploy apps end to end.
  3. Autonomously find and fix bugs in codebases.
  4. Train and fine-tune its own AI models.
  5. Address bugs and feature requests in open source repositories.
  6. Contribute to mature production repositories.
  7. Solve real jobs on Upwork!

Scott Wu, the founder and CEO of Cognition, explained Devin can access common developer tools, including its own shell, code editor and browser, within a sandboxed compute environment to plan and execute complex engineering tasks requiring thousands of decisions. 

Devin resolved 13.86% of issues without human assistance in the tested GitHub repositories, according to the creators’ publication based on SWE-bench, a benchmark that asks agents to resolve real-world issues in open-source projects such as scikit-learn and Django.

There’s a buzzing conversation around the globe that AI could erode basic human coding skills; recently, NVIDIA’s founder remarked that, thanks to AI, everyone is now a programmer. Of course, I think human oversight is still required to refine the output and meet users’ requirements.

Thanks to Devin, humans can now focus on the complex or interesting problems that require our creativity and make the best use of our time. As of now, access to Devin is limited to select individuals; public access is still pending. For more info, visit the creators’ website.

Guess which industry tops the AI maturity index

Hey there! I came across this article from Accenture Research capturing the AI maturity index across various industries in 2021 and 2024 (estimated).

It’s quite obvious that the Tech industry steals the AI show here! Companies such as Google, Meta, Amazon, Apple, and Microsoft are striving hard to innovate and compete for market share in AI-led products and solutions.

Automotive bags the number two spot, thanks to the trend of AI-led self-driving / autonomous vehicles, followed by Aerospace and Defence with AI-enabled remote systems. Life Sciences companies are conducting experiments to reduce drug development time using AI.

Accenture Research reveals that there are enormous opportunities for companies to seize in this space.

One thing I find particularly surprising is the Banking & Insurance industry, which shows relatively lower AI maturity than other industries. In general, the BFSI sector undertakes IT, data, and AI-led projects in-house (via global capability centers) or through outsourced partners. BFSI has a lot of room for AI penetration across functions such as Customer Experience, Sales & Marketing, and Finance & Investments.

Common challenges hampering AI adoption, as indicated in the research, are:

  • Legal and regulatory challenges
  • Inadequate AI infrastructure
  • Shortage of AI-trained workers

Meta’s Large Language Model – LLaMa 2 released for enterprises

Meta, the parent company of Facebook, unveiled LLaMa 2 for research and commercial purposes. It’s released as open source, unlike OpenAI’s GPT and Google’s Bard, which are proprietary.

What is LLaMa?

LLaMa (Large Language Model Meta AI) is an open-source language model built by Meta’s GenAI team for research. LLaMa 2 is the newly released version, available for both research and commercial use.

Difference between LLaMa and LLaMa 2

The LLaMa 2 model was trained on 40% more data than its predecessor. Al-Dahle (vice president at Meta, who is leading the company’s generative AI work) says there were two sources of training data: data scraped online, and a data set fine-tuned and tweaked according to feedback from human annotators to behave in a more desirable way. The company says it did not use Meta user data in LLaMa 2, and excluded data from sites it knew had lots of personal information.

The newly released LLaMa 2 models will not only further accelerate LLM research but also enable enterprises to build their own generative AI applications. LLaMa 2 includes 7B, 13B, and 70B models, trained on more tokens than LLaMa, as well as fine-tuned variants for instruction-following and chat.

According to Meta, its LLaMa 2 “pretrained” models are trained on 2 trillion tokens and have a context window of 4,096 tokens (fragments of words). The context window determines the length of the content the model can process at once. Meta also says that the LLaMa 2 fine-tuned models, developed for chat applications similar to ChatGPT, have been trained on “over 1 million human annotations.”
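To make the context window idea concrete, here’s a minimal Python sketch. The list of placeholder tokens stands in for the output of a real tokenizer (Meta’s actual tokenizer is not shown here); the point is simply that an input longer than 4,096 tokens must be truncated or chunked before the model can process it:

```python
# Toy illustration of a context window: a model can only attend to a fixed
# number of tokens at once, so longer inputs must be truncated or chunked.
CONTEXT_WINDOW = 4096  # LLaMa 2's window, per Meta

def chunk_tokens(tokens, window=CONTEXT_WINDOW):
    """Split a token list into consecutive chunks that each fit the window."""
    return [tokens[i:i + window] for i in range(0, len(tokens), window)]

tokens = ["tok"] * 10000          # pretend we tokenized a long document
chunks = chunk_tokens(tokens)
print([len(c) for c in chunks])   # three chunks: 4096, 4096, 1808
```

Real applications use smarter strategies (sliding windows, summarization, retrieval), but the hard limit per forward pass is the same.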

Databricks highlights the salient features of such open-source LLMs:

  • No vendor lock-in or forced deprecation schedule
  • Ability to fine-tune with enterprise data, while retaining full access to the trained model
  • Model behavior does not change over time
  • Ability to serve a private model instance inside of trusted infrastructure
  • Tight control over correctness, bias, and performance of generative AI applications

Microsoft says that LLaMa 2 is the latest addition to their growing Azure AI model catalog. The model catalog, currently in public preview, serves as a hub of foundation models and empowers developers and machine learning (ML) professionals to easily discover, evaluate, customize and deploy pre-built large AI models at scale.

OpenAI GPT vs LLaMa

A powerful open-source model like LLaMA 2 poses a considerable threat to OpenAI, says Percy Liang, director of Stanford’s Center for Research on Foundation Models. Liang was part of the team of researchers who developed Alpaca, an open-source competitor to GPT-3, an earlier version of OpenAI’s language model. 

“LLaMA 2 isn’t GPT-4,” says Liang. Compared to closed-source models such as GPT-4 and PaLM 2, Meta itself speaks of “a large gap in performance.” However, LLaMa 2 is said to reach ChatGPT’s GPT-3.5 level in most cases. And, Liang says, for many use cases you don’t need GPT-4.

A more customizable and transparent model, such as LLaMA 2, might help companies create products and services faster than a big, sophisticated proprietary model, he says. 

“To have LLaMA 2 become the leading open-source alternative to OpenAI would be a huge win for Meta,” says Steve Weber, a professor at the University of California, Berkeley.   

LLaMA 2 also has the same problems that plague all large language models: a propensity to produce falsehoods and offensive language. The fact that LLaMA 2 is an open-source model will also allow external researchers and developers to probe it for security flaws, which will make it safer than proprietary models, Al-Dahle says. 

With that said, Meta is set to make its presence felt in the open-source AI space, having announced the commercial release of its AI model LLaMa. The model will be available in pretrained form for fine-tuning on AWS, Azure, and Hugging Face’s model hosting platform. And it’ll be easier to run, Meta says – optimized for Windows thanks to an expanded partnership with Microsoft, as well as for smartphones and PCs packing Qualcomm’s Snapdragon system-on-chip. The key advantages of on-device AI are cost reduction (cloud per-query costs) and data security (as data remains solely on-device).

LLaMa can turn out to be a great alternative to pricey proprietary models such as OpenAI’s ChatGPT and Google Bard.


Difference between traditional AI and Generative AI

Generative AI has been the new buzzword since late 2022. The likes of ChatGPT, Bard, etc. are taking AI to all-new levels, with a wide variety of use cases for consumers and enterprises.

I wanted to briefly understand the difference between traditional AI and generative AI. According to a recent report published by Deloitte, GenAI’s output is of higher complexity compared with traditional AI.

Typical AI models generate output in the form of a value (e.g., predicting sales for the next quarter) or a label (e.g., classifying a transaction as legitimate or fraudulent). GenAI models tend to generate a full page of composed text or another digital artifact. Applications like Midjourney and DALL-E produce images, for instance.

In the case of GenAI, there is no single correct answer. The Deloitte study reports that this results in a large degree of freedom and variability, which can be interpreted as creativity.
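The contrast can be sketched in a few lines of Python. The fraud rule and sentence templates below are purely hypothetical stand-ins for trained models, but they show why a discriminative model has one right answer per input while a generative one does not:

```python
import random

# Traditional AI: a discriminative model maps an input to a single value/label.
def classify_transaction(amount):
    # hypothetical threshold rule standing in for a trained fraud model
    return "fraud" if amount > 10_000 else "legitimate"

# Generative AI (toy stand-in): sampling introduces variability, so repeated
# calls with the same "prompt" can yield different, equally valid outputs.
def generate_sentence(rng):
    subjects = ["The analyst", "Our model", "The report"]
    verbs = ["predicts", "suggests", "highlights"]
    objects = ["strong growth.", "a seasonal dip.", "new opportunities."]
    return " ".join([rng.choice(subjects), rng.choice(verbs), rng.choice(objects)])

print(classify_transaction(15_000))        # always "fraud" for this input
print(generate_sentence(random.Random()))  # varies from run to run
```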

The underlying GenAI models are usually large in terms of resource consumption, requiring terabytes of high-quality data processed on large-scale, GPU-enabled, high-performance computing clusters. With OpenAI’s innovation being plugged into Microsoft Azure services and the Office suite, it will be interesting to see the dramatic changes in consumer productivity!

T-shaped vs V-shaped path in your Analytics career

We start by learning multiple disciplines in an industry and then niche down to a specific skill that we master over time to gain expertise and become an authority in that space.

Typically, many people, including me, follow a T-shaped path in their career journey, where the horizontal bar of the ‘T’ refers to a wide variety of generalized knowledge and skills, and the vertical bar refers to depth in a specific skill. For instance, if you’re a Data Scientist, you still perform minimal data pre-processing before doing exploratory data analysis, model training/experimentation, and model selection based on evaluation metrics. Although a Data Engineer or Data Analyst primarily works on data extraction, processing, and visualization, a Data Scientist might still need to be familiar with those tasks in order to get the job done on time without depending on other team members.

A Data Scientist’s vertical bar refers to crafting the best models for a dataset, while the horizontal bar could cover data processing (cleaning, transformation, etc.) and visualizing KPIs as insights for the business to make informed decisions.

Strategy & leadership consultant and author Jeroen proposes a V-shaped path, which makes sense in our contemporary economic situation, where layoff news is buzzing across many multinational companies.

In terms of similarities, the author reiterates that both models involve understanding one focus area deeply while having shallower knowledge across other areas. The V-shaped model refers to having one area of deep knowledge plus many adjacent knowledge areas that are neither deep nor shallow, but somewhere in between. Jeroen describes it as, “It is medium-deep, medium-broad, enabling us to be versatile and agile.”

For illustration, if a Data Scientist aspires to go above and beyond expectations, they can collaborate with Data Engineers, perform AI/ML modeling, build reports and dashboards, generate meaningful insights, and enable end-user adoption of those insights. It takes a combination of hard and soft skills – storytelling, collaboration with peers, project management, and so on. Over time, as one repeats this whole process, they get better and better (develop deeper knowledge) at model development and management, and develop the adjacent soft skills to excel at work.

In my view, we start with a T-shaped path, and eventually it morphs into a V-shaped career path as we work hard on one skill and also develop its adjacent skills. And this applies to any field you’re in.

How long do you think this transformation into a V-shaped path would take? Will it take about 10,000 hours (roughly a decade), as per Gladwell’s book Outliers, to become an expert? Maybe, yes! The sooner, the better!!

I’ll leave you with a three-phase approach to becoming an expert according to the author Jeroen.


How do people spend their time?

In this fast-paced world, do we pause and reflect on where a significant part of our time actually goes, and with whom we spend most of our time over the course of our lives? I think it’s important to ponder these questions and consider taking corrective actions depending on our life’s priorities.

I am sharing some interesting insights I gathered from a couple of sources.

Our World in Data has published how people spend their average day, comparing data across a few selected countries. The dimensions compared are work, sleep, eating, and other leisure activities.

  • China puts in about twice the work hours of countries such as Italy, and interestingly it also ranks high on sleep: people in China dedicate more time to sleeping than most of the other countries listed.
  • Countries like Italy, Finland, Norway, Denmark, Germany, and Belgium indulge in more leisure activities than the others.
  • People in the USA and India sleep more than the average of 8 hours and 48 minutes. It’s surprising to me that India emerged at the top for this data point! South Korea sleeps the least on the list.
  • France, Spain, Italy, and Greece appear to spend more time eating and drinking, whereas the USA spends the least.

A general pattern is that people in rich countries can afford to work less and spend quality time on leisure activities. There is a strong correlation with the happiness index as well, which suggests that people who spend quality leisure time are happier. For instance, Finland has been honored as the happiest country in the world, and its people spend more time on leisure activities.

While these insights are at the country level, I want to refer to another source, a Twitter thread by Sahil Bloom. He summarized some key insights on who we spend our time with over the course of our lives. The source data comes from the American Time Use Survey, published in Our World in Data.

  • Time spent with Family

As we grow from toddlers to adults, we move places for work and settle in different cities and countries. The graph clearly shows that we spend less and less time with our parents and siblings. I can’t disagree with what Sahil beautifully puts: “Prioritize and cherish every moment.” If you get a chance to spend your whole life near your parents, consider yourself lucky, as many people are not so privileged, for various reasons.

  • Time spent with friends

Finding true friends has become rare these days. Again, consider yourself lucky if you have found one and are still keeping in touch. Friends often change over the years; hence the graph peaks during the teenage years and then gradually declines. Stay in touch with the true ones, especially those who travel with you through the good and bad phases of your life.

  • Time spent with Partner

For the majority of people, time spent with a partner exceeds time spent with parents, siblings, or friends. People tend to move for better work, taking their partners and kids far from their respective parents, thanks to globalization.

  • Time spent with children

It is always a joy to re-learn with your kids and view the world again through their lens. The graph shows a peak between the ages of 30 and 40, declining thereafter.

  • Time spent with coworkers

This is where you will spend the most time outside of your family. Finding the right workplace, the right mentor, and the right peers is key to success in your professional career.

  • Time spent alone

However you view the timeline of your life, you may end up spending a lot of time alone, during commutes and otherwise. Having a conscious daily routine is key to bettering yourself each day.

There’s a famous line: if you get one percent better each day for one year, you’ll end up thirty-seven times better by the time you’re done. Spend your alone time seeing how you can improve your personal and professional life. Celebrate small wins, and spend quality time with your loved ones. Live with contentment.
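That “thirty-seven times” figure is simply a 1% improvement compounded daily over a year, which one line of Python confirms:

```python
# compounding a 1% daily improvement over 365 days
improvement = 1.01 ** 365
print(round(improvement, 2))  # 37.78 – roughly "thirty-seven times better"
```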

Analytics Industry Study – India – May 2021

You may be an experienced employee in the analytics space, an aspiring Data Scientist/Engineer, or an executive looking to channel your investments by creating business use cases. Technologies like data/business analytics, AI/ML/DL, and data engineering have been thriving in the market, creating better career opportunities and helping bring better customer experiences to products and services.

According to Allied Market Research, the global big data and business analytics market was valued at $193.14 billion in 2019 and is projected to reach $420.98 billion by 2027, growing at a CAGR of 10.9% from 2020 to 2027. It’s promising to see the growth in this industry, given that many client organizations are pivoting to digital and undergoing massive digital transformation exercises. This will only create more business opportunities that can be uncovered from huge volumes of data using analytics.
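As a quick sanity check on figures like these, CAGR is just the geometric growth rate between two endpoints. The sketch below uses the quoted 2019 and 2027 values; since the report’s 2020 baseline isn’t given here, the result lands near, not exactly on, the stated 10.9%:

```python
def cagr(start_value, end_value, years):
    """Compound annual growth rate between two endpoint values."""
    return (end_value / start_value) ** (1 / years) - 1

# Quoted endpoints: $193.14B (2019) -> $420.98B (2027)
rate = cagr(193.14, 420.98, 2027 - 2019)
print(f"{rate:.1%}")  # ~10.2% over 2019-2027; the report's 10.9% uses a 2020 base
```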

In India, according to a recent 2021 study by Analytics India Magazine, the analytics industry has a market size of about $45.4 billion, registering 26.5% YoY growth (last year, it was $35.9 billion).

There are a few insights I learnt from their study that I would like to share with you today –

  • Indian analytics industry to grow to a market size of $98 billion by 2025 and $118.7 billion by 2026
  • Analytics accounts for 23.4% in the Indian IT/ITES market size in 2021. This is projected to grow to 41.5% by 2026
  • The BFSI sector (13.9%) saw the maximum analytics contribution, followed by Manufacturing, Retail & E-Commerce, Pharma & Healthcare, FMCG, Telecom, Media & Entertainment, and Energy
  • Bengaluru (30.3%) is the top-most city in terms of analytics contribution followed by Delhi (26.2%), Mumbai (23.4%)
  • Analytics services – more than half (51.6%) of the market share comes from the U.S., followed by the U.K. (13.2%), Australia (8.3%), and Canada (6.4%)
  • Among the analytics servicing companies, IT firms dominate the contribution at 43% with leading firms such as TCS, Accenture, Infosys, Cognizant, Wipro, IBM, Capgemini.

With respect to salary compensation, there are a few interesting points to note as well –

  • 41.5% of all the analytics professionals fall under the higher income level, greater than 10 Lakhs
  • The salary of an analytics professional is 44% higher than that of a software engineer. This could be an attractive proposition for fresh or entry-level graduates considering analytics as a career option.
  • Data Engineers (14.9L per annum), Big Data Specialists (14.8L per annum) surpassed the median salary of AI/ML Engineers (14.6L per annum) by a narrow margin.
  • Python skill set saw the highest salary followed by SAS/R, QlikView/Tableau, PySpark/Hadoop

Here’s a breakdown of salary across experience levels (Source: AIM). Due to factors such as pandemic salary cuts, salaries in 2021 are slightly lower than in 2020.

Here’s a look across different industries and how they pay on an average –

Captive centers and consulting firms pay more than domestic firms (like Reliance), boutique analytics firms, and IT services companies.

Hope this compilation of the analytics industry outlook gives you some insights to focus on and work towards your goals!

Credits: Analytics India Magazine (AIMResearch), Allied Market Research

5 Presentation Hacks To Captivate Your Audiences

Presentation skills – well, this skill can’t be emphasized enough in a client-serving role.

In a typical business analytics engagement, our clients wish to take away some important findings and recommendations.

For the project development team, it might be trendy to use all those fancy statistical and machine learning models with state-of-the-art technologies (such as cloud), programming, and methodologies. At the end of the day, business users only want the insights, shown in a precise format for easy consumption and action.

Not all business folks comprehend the under-the-hood tools and programming. It is the project team’s responsibility to showcase the end results as insights in presentations to the key stakeholders – more precisely, to show how the solution might help the team in so many ways.

Brainstorming the storyline, preparing a skeleton out of the data & insights, adding relevant content, and reviewing internally multiple times before the actual presentation day is the usual practice.

On the presentation day, it is critical to make the best use of the stakeholders’ time and explain the key takeaways in your favorite presentation tool, such as MS PowerPoint, Google Slides, etc.

I came across these 5 interesting presentation hacks from Ivan Wanis Ruiz.

1. Lazy Rule

Instead of writing down all the text on a slide to explain the flow, make a slide that gives the gist, so the audience can quickly skim it and then listen to you.

If your slide explains pretty much everything like a hand-out note, then there is no need for a presentation session! So, create a slide with only minimal words and complement it with your narrative.

2. Adding Visual Effects (Picture Superiority Effect)

A picture speaks a thousand words! Audiences do not have time to read and remember everything, even though they attempt to skim through it all.

To make it easier, present your insights in the best visual format so they are quickly interpreted and registered in memory. Our brains process a visually appealing picture better than a slide full of text in different fonts and colors.

Understandably, it is also difficult for the audience to listen while staring at a section of the slide, as most often they do not know which part of the slide you are narrating. This takes us to the next principle – the magnification rule.

3. Magnification Rule

Explicitly ask your audience to look at a specific section of the slide (you may also color-code it accordingly) and begin your narration in a structured manner. This might sound simple, but it keeps the audience engaged with you.

4. Capitalize on ‘B’ and ‘W’

To make your presentation more engaging and personalized, you may ask a question and explain your narrative with an experiment. During presentation mode, press ‘B’ (black screen) or ‘W’ (white screen) and start an illustrative example based on your context.

This can also be used when you want to write with a pen for a short illustration; you can then resume the presentation by pressing the same key.

5. Repeating Agenda Strategy

Most often, your deck might run to quite a number of slides. It’s always good practice to set an agenda, compartmentalize your slides accordingly, and have a clear script on what needs to be explained on each slide.

Repeating the agenda after each section allows the audience to keep track of which sections have been completed and which remain. It also allows you to shift between sections in the interest of time and preferences.

For more information, please visit the author’s Udemy page.

Happy Presentations!