Discover how Google dev communities are harnessing AI! 🚀
🪙 Dhwani Vaishnav, GDSC Lead, celebrates Gemini’s token upgrade to 2M.
💻 Daisy Mayorga, Women Techmakers, uses AI for CS education and medical translations.
🟢 Angelica Oliveira, Android Google Developer Experts, generates screens from images in Android Studio.
Join a community → https://lnkd.in/enFKh5xD
Exciting release for creators and devs with Gemini 1.5 Pro! 🚀 Audio understanding + unlimited file use + JSON mode = endless possibilities.
No waitlist, and plenty of new use cases to cover; check out Liam Bolling's Twitter post!
Can't wait to see all the experiments with Gemini AI over the next few days!
#ai #google
ICYMI, check out these exciting developer announcements from I/O today:
⚡ New Gemini 1.5 Flash model, optimized for high-frequency tasks where response speed matters most
💻 2M token context window and context caching coming to Gemini Pro
🖼 Major updates to Google's open models, including Gemma 2 and PaliGemma, our first vision-language open model, optimized for image captioning, visual Q&A and other image labeling tasks.
Want more? Take a look at the article linked below!
LLM APIs have revolutionized the use of AI in our daily lives. With LLMs, it's now easier than ever to integrate AI into day-to-day applications. In this demo, a DevRev snap-in is used to fetch Google Play Store reviews of the LinkedIn app, and the Mixtral-8x7B model is used to identify the reviews that specifically request a new feature. The entire code for fetching the reviews, performing the classification, and loading the results into DevRev is just ~100 lines. Kudos to Harpinder Jot Singh for making it possible! Check out the code here: https://lnkd.in/gm7YZVCt
Do these kinds of LLM use cases get you excited? DevRev is organizing a 10-day online hackathon on a very similar topic with exciting prizes, so be sure to check out the details here: https://lnkd.in/g-fxYNvg. Let's see what we can create together!
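The actual snap-in code lives at the link above; as a flavor of the classification step, here is a minimal offline sketch. A tiny keyword heuristic stands in for the real Mixtral-8x7B API call, and the prompt wording and function names are illustrative, not from the demo:

```python
# Sketch of the review-classification step: wrap each review in a yes/no
# prompt and ask an LLM whether it requests a new feature. heuristic_llm is
# a stand-in for a real model call (e.g. Mixtral-8x7B behind an API).

from typing import Callable

PROMPT = (
    "Does the following app review request a new feature? "
    "Answer YES or NO.\n\nReview: {review}"
)

def heuristic_llm(prompt: str) -> str:
    """Placeholder for a real LLM call; swap in an API client here."""
    feature_words = ("add", "wish", "would be great", "please support", "feature")
    review = prompt.split("Review:", 1)[1].lower()
    return "YES" if any(w in review for w in feature_words) else "NO"

def is_feature_request(review: str, llm: Callable[[str], str] = heuristic_llm) -> bool:
    return llm(PROMPT.format(review=review)).strip().upper().startswith("YES")

reviews = [
    "Please add dark mode to the feed.",
    "The app keeps crashing on startup.",
]
print([r for r in reviews if is_feature_request(r)])
# ['Please add dark mode to the feed.']
```

Because the model call is injected as a parameter, the same loop works unchanged whether the classifier is a heuristic, Mixtral, or any other LLM endpoint.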
Google said it plans to expand the advanced version of this large language model (LLM) next year.
The LLM is multimodal, meaning it can understand different types of information, including text, audio, images, and video.
Gemini will be available in three models:
1️⃣ Gemini Ultra, the largest and most capable, for highly complex tasks
2️⃣ Gemini Pro, for a wide range of tasks
3️⃣ Gemini Nano, the most efficient, for on-device tasks; Android developers can use it to build Gemini-powered apps.
Follow us for more updates like this
#googleupdates #gemini #googleai #artificialintelligencetechnology #Frescowebservices
Exciting news from I/O 2024! Google has just unveiled groundbreaking tools that could revolutionize how our kids engage with programming. From the highly advanced Gemini 1.5 Pro AI model to innovative generative tools for music and video creation, the possibilities are endless. It's an exhilarating time, with Google pushing the boundaries amid competition with OpenAI's GPT-4o.
Key highlights include:
Gemini 1.5 Pro: Now equipped with audio and image understanding, delivering fast and efficient results.
Imagen 3 & Veo: Cutting-edge tools empowering lifelike image and video generation.
LearnLM: AI models specifically fine-tuned to enhance educational experiences.
These cutting-edge technologies are poised to reshape the future of tech education. Stay tuned as we explore the profound impact they'll have on shaping the next generation of coders.
Learn more about these advancements here: https://lnkd.in/gTvKxRNF #GeminiPro #CuttingEdgeTech #NextGenCoders
today in AI:
1/ Mike Krieger joins Anthropic as Chief Product Officer. FYI, Mike is the co-founder and ex-CTO of Instagram. Recently, he built Artifact, a news-sharing app that was acquired by Yahoo. Anthropic is focusing on building Claude into an everyday app for the workplace.
2/ OpenAI and Google have made big announcements this week, but most of the features are still unreleased. We've got to make do with the demos. Pietro made an Astra-like voice assistant using Gemini 1.5 Flash: https://lnkd.in/dZfJcwCv #AI #OpenAI
The opportunity to bring value by leveraging LLMs is opening a new wave of possibilities for both enterprises and startups. Tackling simple use cases like these one at a time creates a huge catalogue of apps that serve a broad swath of users. Many companies are already working on initiatives that implement LLMs in their products, and many of these existing organizations are ahead of the game. I can only touch on a few of the use cases here.
Imagine an enterprise leveraging this for customer service and support: surfacing meaningful pain points, classifying them automatically, and addressing them for you? #servicenow #zendesk #atlassian
Mining analysts' and critics' comments and reviews to learn what they really mean?
Legal teams extracting contract terms and validating violations? #docusign #ironclad #conga
Sales teams learning which types of customers convert better, based on the various conversations captured? Oh boy, there are a ton of companies in this space.
Understanding the end-to-end impact of a developer's code on revenue? #devrev
Extracting highlights from our own ticketing systems and summarizing pain points and our failures to react quickly in product development?
Giving users the ability to ask questions of an analytics app and get results without an analyst or a spreadsheet? #thoughtspot
Connecting to a CRM system to extract a quote and create an order in the ERP? Many integration tools are already doing this.
Security violations causing pain for a small team, and applying LLMs with specific rules to filter the critical issues out of an ocean of false alarms and alerts? There are several companies working in this space; I know they will tag this post :-)
Just bringing all of the enterprise documents, videos, and images together to create a searchable asset? #vectara
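The "searchable asset" idea in the last point boils down to: embed every document, embed the query, rank by similarity. A toy sketch, where a deterministic hashed bag-of-words vector stands in for a real embedding model:

```python
# Toy semantic-search sketch: embed documents and query as vectors, then
# rank by cosine similarity. A hashed bag-of-words embedding stands in for
# a real embedding model served behind an API.

import math
import zlib
from collections import Counter

DIM = 256  # fixed vector dimension for the toy embedding

def embed(text: str) -> list[float]:
    """Deterministic toy embedding: hashed bag-of-words, L2-normalized."""
    vec = [0.0] * DIM
    for word, count in Counter(text.lower().split()).items():
        vec[zlib.crc32(word.encode()) % DIM] += count
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def search(query: str, docs: list[str]) -> str:
    """Return the document whose embedding is closest to the query's."""
    q = embed(query)
    return max(docs, key=lambda d: sum(a * b for a, b in zip(q, embed(d))))

docs = [
    "Q3 revenue report for the sales team",
    "Onboarding video transcript for new engineers",
    "Security incident postmortem for the March outage",
]
print(search("revenue report", docs))
```

A production system would swap `embed` for a real embedding model and store the vectors in a vector database, but the ranking logic stays the same shape.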
This is crazy! A #ContextWindow of 1M tokens! By comparison, #ChatGPT4 has a context window of 128k tokens! 👀
Here is a basic crash course in tokens:
↳ LLMs accept input as tokens, and a model can only take in a limited number of tokens at once.
↳ 1 token ≈ 4 characters, i.e. "ABCD"
↳ So 128k tokens ≈ 512,000 characters
↳ If 1 word ≈ 8 characters
↳ Then 128k tokens ≈ 64,000 words
So if ChatGPT-4 can understand/intake ~64,000 words at a time, Gemini 1.5 Pro can understand/intake ~500,000 words at a time 🙀
It's like your brain: you can probably process a 2-page document very efficiently, or maybe 10 pages, but you can't retain a 200-page book well enough to interpret it.
Yet these models are getting so good that they can intake a 600-page book and answer questions from it.
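The token-to-word conversion can be sketched in a few lines of Python. The 4-characters-per-token and 8-characters-per-word ratios are rough rules of thumb, not tokenizer-exact figures:

```python
# Rough token-to-word arithmetic using rule-of-thumb ratios:
# ~4 characters per token and ~8 characters per word.

CHARS_PER_TOKEN = 4
CHARS_PER_WORD = 8

def tokens_to_words(tokens: int) -> int:
    """Convert a token budget into an approximate word count."""
    return tokens * CHARS_PER_TOKEN // CHARS_PER_WORD

print(tokens_to_words(128_000))    # ChatGPT-4's window: 64000 words
print(tokens_to_words(1_000_000))  # Gemini 1.5 Pro's window: 500000 words
```

Real tokenizers vary with the text (code and non-English text use more tokens per word), so treat these numbers as ballpark estimates.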
🚀✨ Welcome to Gemini 1.5: Our next-generation model with a context window of 1M tokens. → https://lnkd.in/diVR2fjV
Developers can explore the latest Gemini models, including Gemini 1.5 Pro, for reasoning across multiple large files, videos (image frames), or even an entire codebase. Get started in Google AI Studio.
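A minimal sketch of what getting started can look like with the `google-generativeai` Python SDK. The file contents and question are placeholders, the prompt-packing helper is my own, and the network call only runs if a GOOGLE_API_KEY environment variable is set:

```python
# Sketch: pack several files plus a question into one long-context prompt,
# then (if an API key is available) send it to Gemini 1.5 Pro.

import os

def build_prompt(question: str, files: dict[str, str]) -> str:
    """Concatenate named files and a question into a single prompt string."""
    parts = [f"--- {name} ---\n{text}" for name, text in files.items()]
    parts.append(f"Question: {question}")
    return "\n\n".join(parts)

prompt = build_prompt(
    "Where is the retry logic implemented?",
    {"client.py": "def fetch(): ...", "server.py": "def handle(): ..."},
)

if os.environ.get("GOOGLE_API_KEY"):
    import google.generativeai as genai  # pip install google-generativeai
    genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
    model = genai.GenerativeModel("gemini-1.5-pro")
    print(model.generate_content(prompt).text)
```

The 1M-token window is what makes this naive "concatenate everything" approach workable for multi-file or whole-codebase questions.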
#BuildWithGemini
🚀 Big news! The new Gemini 1.5 is a game-changer for developers. With a huge context window of 1M tokens, it's making projects easier and more exciting. Let's get creative and #BuildWithGemini #WomenTechmakers #AI
Last week marked my 17th Googleversary! The year I started, we had just launched YouTube in Australia and New Zealand, which is now home to some incredible local creators like Mike’s Mic, and to more than 300 other Australian channels with more than 1M subscribers - a sign of how much the creator community has grown!
I often get asked what’s kept me at Google, and I can honestly say that watching I/O this week was a pretty resounding answer.
If you’re not familiar with I/O, it's our annual developer conference - an opportunity for us to show the world the products and experiments our teams are working on and the research breakthroughs we're making. It’s a super proud moment for all of us!
This year was no different, as we gave folks a glimpse into some new products, and what the future will look like, powered by Google’s AI.
💻 One for the parents out there! On select Android devices, kids can now use Circle to Search to help them with their homework. When they circle a prompt they’re stuck on, they’ll get step-by-step instructions to solve a range of physics and math word problems.
📷 If you’re a Google Photos fan, this one is coming soon! You’ll be able to ask it to find exactly what you’re looking for, using simple everyday language. For example, I could ask: “Show me all my family photos from our trips to the Gold Coast.”
💫 And one for the future gazers! Imagine a world where you have your own AI assistant that can make sense of different sensory information, helping you make decisions in real time. Astra, which stands for “advanced seeing and talking responsive agent” will be able to do just that. Check out the video below.
If you’re keen to know more about what went down at I/O, head here → https://lnkd.in/g7qgFaYy
That's right, but when you speak to Google, Google doesn't work properly right now.