A team at Google has proposed using artificial intelligence to create a “bird’s-eye” view of users’ lives from mobile phone data such as photographs and searches.
Dubbed “Project Ellmann,” after biographer and literary critic Richard David Ellmann, the idea would be to use large language models (LLMs) like Gemini to ingest search results, spot patterns in a user’s photos, create a chatbot and “answer previously impossible questions,” according to a copy of a presentation viewed by CNBC. Ellmann’s aim, it states, is to be “Your Life Story Teller.”
It’s unclear if the company has plans to produce these capabilities within Google Photos, or any other product. Google Photos has more than 1 billion users and 4 trillion photos and videos, according to a company blog post.
Project Ellmann is just one of many ways Google is proposing to create or improve its products with AI technology. On Wednesday, Google launched Gemini, its latest and “most capable” AI model yet, which in some cases outperformed OpenAI’s GPT-4. The company plans to license Gemini to a wide range of customers through Google Cloud for use in their own applications. One of Gemini’s standout features is that it’s multimodal, meaning it can process and understand information beyond text, including images, video and audio.
A product manager for Google Photos presented Project Ellmann alongside Gemini teams at a recent internal summit, according to documents viewed by CNBC. The teams wrote that they had spent the past few months determining that large language models are the ideal technology to make this bird’s-eye approach to one’s life story a reality.
Ellmann could pull in context using biographies, previous moments and subsequent photos to describe a user’s photos more deeply than “just pixels with labels and metadata,” the presentation states. It proposes to be able to identify a series of moments like university years, Bay Area years and years as a parent.
“We can’t answer tough questions or tell good stories without a bird’s-eye view of your life,” the presentation states.