
Digital monitors need the right metrics to polish experiences: Mozark’s Kartik Raja

May 31, 2024 03:07 PM IST

Experiences that used to be composed 50% in-app and 50% from third-party apps have already skewed to 20% and 80%, Kartik Raja of digital monitoring platform Mozark explains to HT

Have you ever wondered, when you use an app or a digital platform, shop online or offline, or even proudly share screenshots of 5G speed tests from where you live, who really manages that experience? That’s exactly the work of digital monitors, often the unsung heroes of our digital experiences. Indian tech company Mozark is an example. Its founder Kartik Raja believes that while the metrics for identifying points of failure vary depending on the genre of the app, the specifics of the platform or its intricacies, the basics don’t change. “I think, at the very basic level, it’s about devices, network, the middle network and app as the four broad layers,” he tells HT.

A glimpse of the Mozark interface. (Handout photo.)

For a company that works across industries on online and offline experiences in India and abroad, as well as with governments in many countries to build the experience layer for their services (the Federal Communications Commission in the USA and the Department of ICT in the Philippines are examples), Raja understands what he calls the “different sets of challenges” each vertical potentially poses. Airtel, Sony Liv, French multinational retail chain Carrefour, Marina Bay Sands, Changi Airport and ICICI Bank are some illustrations. As HT sits down for a conversation with Raja, he talks about the use of AI and emerging tech to build solutions, the challenges unique to each vertical, in-house and Google’s artificial intelligence models, and how the apps we interface with are changing from the foundation up. Edited excerpts.

Q. How has Mozark been able to use AI and other emerging tech to build solutions across spaces as diverse as banking, fintech, OTT, hospitality and telecom?

Kartik Raja: If you look at the trend of AI, what you will find is that in the last one year, especially in the software development life cycle, a lot of development has become much faster. We have about the same team that we had last year, when we used to do one release every three months, but now we do a release every 15 days. That’s because it takes us 15 days to validate and test. In the future that everyone talks about, we’re going to move to AIOps, where all the operations will be automatic. AI will fix everything. We’re moving from pure-play development to doing all this testing, validation and then continuous monitoring. Our platform is one of the world’s first monitoring platforms.

When you see anything digital today, two things stand out. Most of the time you’re not within that digital platform alone, but also with some third-party digital platform. Let me give you an example. Let’s say you are using Ola or Uber. Within that, you’re using Google Maps or Apple Maps, and then PhonePe or Google Pay to make the payment. You’re watching a video on Netflix? It’s on somebody else’s CDN (or content delivery network). So, how do you monitor what doesn’t belong to you? We’re able to do that with advanced robotics and test a lot of things, but when you are monitoring something in a commercial setting, these don’t fall under your premises and therefore can’t be monitored directly. Quite literally, human beings sit down, bring things up on the screen and test. Every time there is a new release, you have somebody physically do this testing across different sets of devices, networks and other apps. They’ll have to check everything if Google does an Android update. Anywhere from 100 to 1,000 people are just checking whether bill payments are working for a particular gas company or an electricity company.
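The journey-level monitoring Raja describes, timing each step of a flow that spans third-party services, can be sketched very loosely as follows. The function name, step labels and structure here are illustrative assumptions, not Mozark's actual system:

```python
import time

def measure(step_name, fn):
    """Time one step of a user journey (e.g. a maps lookup or a payment
    call made through a third-party service) and record whether it worked.

    `fn` stands in for the real call; all names are hypothetical.
    """
    start = time.monotonic()
    try:
        fn()
        ok = True
    except Exception:
        ok = False
    return {
        "step": step_name,
        "ok": ok,
        "latency_s": time.monotonic() - start,
    }

# A ride-hailing journey might be monitored as a sequence of such steps:
journey = [
    measure("maps_lookup", lambda: None),   # placeholder for the real call
    measure("fare_quote", lambda: None),
    measure("payment", lambda: None),
]
```

The point of structuring it this way is that each third-party dependency gets its own timing and pass/fail record, so a slow journey can be attributed to a specific external step rather than to the app as a whole.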

Q. Any interesting examples within those?

KR: We actually started using AI and building on AI before ChatGPT came along, because Google gave us access to their version of AI image processing. We have won an award from Google for the most innovative uses of AI. We can mimic all kinds of test cases and analyse what is going wrong. Is it something in the back end, at the server? Is it something at the app end? Is it something with the device? These are typically the three things that can go wrong.
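The three failure points Raja lists can be framed as a simple triage over layer-level health checks. A minimal sketch, where the function name and check order are illustrative assumptions rather than Mozark's actual diagnostic logic:

```python
def triage(device_ok, app_ok, server_ok):
    """Return the likely failure layer from per-layer health checks.

    Mirrors the three failure points named in the interview: the back
    end/server, the app end, and the device. Purely illustrative.
    """
    if not server_ok:
        return "backend"
    if not app_ok:
        return "app"
    if not device_ok:
        return "device"
    return "healthy"
```

A real system would of course derive each boolean from measurements (server response codes, app-side errors, device telemetry) rather than receive them directly.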

About two years ago, the FIFA World Cup was one of the first truly digital global events. In Qatar, they had not just stadia, but fan zones as well. What was interesting about this live experience was that people weren’t just watching, but also communicating digitally at the same time. What they found was that suddenly, in the stadium, the network was slowing down. It was figured out that there was one particular emoji of Messi which was cached in Bahrain and not in Qatar. Making that change to the cache storage in real time was a matter of one minute, and the networks were back up again. With generative AI operations, this will happen very quickly.

Q. How difficult is it, and how many metrics or points of failure do you monitor, considering so many apps have become multi-layered complexities?

KR: It’s basically about devices, the network, middle network and app as the four broad layers. What ends up changing though, like you rightly said, is that experiences which used to be composed 50% in-app and 50% from third-party apps have already gone to 20% and 80%. Very soon, it’ll probably be 1% and 99%. I mean, the core function of an app is going to be just 1%, and 99% is going to come through API calls and things that come from outside. If you’ve ever tried to check your bank balance, you’ll realise it’s faster to do it with PhonePe or Google Pay than it is with your bank’s own app.
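The 50/50-to-20/80 shift Raja describes is just an arithmetic split of where a user journey's time or function sits. A toy illustration with made-up numbers (the millisecond figures are invented for the example):

```python
def third_party_share(in_app_ms, third_party_ms):
    """Fraction of a journey attributable to third-party calls.

    `in_app_ms` is time spent in the app's own code; `third_party_ms`
    is a list of durations of external API calls. Hypothetical metric.
    """
    external = sum(third_party_ms)
    return external / (in_app_ms + external)

# e.g. 200 ms in-app, 800 ms across external maps and payment calls:
share = third_party_share(200, [400, 400])  # the 20/80 case
```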

That’s really where we see that the more complexity there is, the more important it is for them to get this level of information right. That’s what we provide. Even if app developers are able to identify within the back end what happened, they still need that level of information, and that’s really where we provide a lot of value. Secondly, if it’s an AI that is going to do the analytics, it needs the correct models.

Q. What AI models do you use, and are these developed in-house?

KR: There are two aspects to this. We use Google’s Gemini for now. It’s one of the best AI models at this time, and we started off using the earlier versions, before they called it Gemini. That’s what we built it on initially, and of course, as newer models on GPT and imaging come along, we will start doing that. But the sophistication of these models comes from the fact that we look at it across 200 to 300 different types of screens and devices, including large TVs, small TVs, cheaper handsets, more expensive handsets, and so on.
