The AI-powered Web3 Newsroom
About the project
Algo is a data-visualization and innovation studio specialized in video automation.
In this project, we built an automated newsroom that, thanks to AI, turns any relevant crypto & web3 update into bite-sized videos for social media.
It’s a generative system / motion toolkit able to generate a virtually infinite stream of videos, always covering the relevant news of the day.
The system is run by AI, with a little help from a very small editorial team.
Video template
What sparked this project?
The client became interested in our approach to data-viz & video automation and got in touch. Their objective was very ambitious: building one source of truth for the web3 world. A social media profile that would post news before everyone else by accessing the best data sources in the industry. A newsroom that would run itself with AI, with minimal editorial support.
Building such an ambitious project required 4 months of work, connecting 5 different data APIs and designing, animating & automating 7 unique video templates.
For comparison, our most common projects require a couple of months and include 1–2 templates, on average.
Who was on the team for the project?
Our team on this project was made up of:
Luca Gonnelli — Co-founder & Creative Director
Camille Pagotto — Art Director & Designer
Marco Oggero — Lead Animator & Automation Engineer
Nima Farzaneh & Mattia Giordano — Data Science & Web development
Marina Echer Barbieri — Project Manager
Ornella Felizia & Mara Salazzari — Portfolio
Do you have some project metrics to share?
We've created 7 templates that generate about 750 videos each month, each in 3 different aspect ratios (1:1, 9:16, 16:9).
Some are auto-triggered in real time (by something happening in the data), some are triggered every week, and some are launched manually.
Auto-triggers allow Algo to automatically create a video in real-time:
→ when a coin (like Bitcoin or Ethereum) goes up
→ when an article in your niche is trending
→ when a tweet (within a user list) is going viral
For these, the time from data received to video posted is less than 2 minutes. No human team can be that fast.
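To make the idea concrete, here is a minimal, hypothetical sketch of what such a trigger layer could look like; the rule names, thresholds and data fields are illustrative, not the actual system:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Event:
    kind: str      # "price", "article" or "tweet"
    payload: dict  # raw fields from the data source

@dataclass
class TriggerRule:
    template_id: str                    # which video template to render
    condition: Callable[[Event], bool]  # when to fire

# Illustrative thresholds only.
RULES = [
    TriggerRule("coin-pump", lambda e: e.kind == "price"
                and e.payload["change_24h_pct"] >= 5.0),
    TriggerRule("trending-news", lambda e: e.kind == "article"
                and e.payload["engagement_score"] >= 0.8),
    TriggerRule("viral-tweet", lambda e: e.kind == "tweet"
                and e.payload["retweets_per_min"] >= 50),
]

def handle(event: Event, enqueue_render: Callable[[str, dict], None]) -> None:
    """Check an incoming event against every rule and enqueue matching renders."""
    for rule in RULES:
        if rule.condition(event):
            enqueue_render(rule.template_id, event.payload)

# Example: a Bitcoin move arriving from the price feed.
handle(Event("price", {"asset": "BTC", "change_24h_pct": 6.2}),
       enqueue_render=lambda template, data: print(f"render {template} with {data}"))
```

Everything after a rule match (filling the template, rendering the three aspect ratios, posting) happens automatically, which is what keeps the end-to-end time under two minutes.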
What is your approach to working on a project like this? Do you follow a specific process or framework?
Creative Phase
Strategic Workshop
We start with a strategic workshop where we write the brief with the client and outline the storylines that each template should convey to meet their goals.
Data Analysis
Then we search for the data sources that will allow us to build the project.
Data Storyboard & Dashboard Mockup
We storyboard every scene of the video with sketches, highlighting the variable data in each scene. Then, we design a mockup of the interface the client will use to launch videos.
Design
We define the style of the video through moodboards & style-frames. Then, we design the templates in Figma, using variables and all the tools that help us take data variability into consideration.
Animation & Video Pilot
We complete the creative phase by adding animation in After Effects, then music and sound. We then create, still completely manually, two or three versions of the video customized with real-life data. We call these Video Pilots.
Technical Phase
Dashboard Dev & Data Integration
We kick off the technical phase by building the dashboard and giving the client access to it, integrating all the data in input as well as the video delivery in output.
Video Automation
It’s time to code the video pilot and transform it into a Template that speaks with the data and can be easily customized through our Algo platform. We do this directly within After Effects or with the Lottie library.
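As a simplified illustration of what “a Template that speaks with the data” can mean on the Lottie side: a Lottie animation is plain JSON, so placeholder text layers can be swapped for live values right before rendering. The file names and placeholders below are hypothetical, not the actual template format:

```python
import json

def fill_lottie_template(template_path: str, values: dict) -> dict:
    """Replace placeholder strings (e.g. "{{PRICE}}") in every text layer.

    Lottie files are plain JSON; text layers (type 5) keep their string
    at layer["t"]["d"]["k"][0]["s"]["t"].
    """
    with open(template_path) as f:
        animation = json.load(f)

    for layer in animation.get("layers", []):
        if layer.get("ty") != 5:  # 5 = text layer
            continue
        text_doc = layer["t"]["d"]["k"][0]["s"]
        for placeholder, value in values.items():
            text_doc["t"] = text_doc["t"].replace(placeholder, str(value))

    return animation

# Hypothetical usage: inject today's numbers into a price template.
filled = fill_lottie_template("coin_pump.json",
                              {"{{COIN}}": "ETH", "{{CHANGE}}": "+6.2%"})
with open("coin_pump_filled.json", "w") as f:
    json.dump(filled, f)
```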
Cloud Setup & Testing
We set up the dashboard & templates in our cloud infrastructure and start rendering videos. Both our team and the client test the solution to find & solve bugs.
Go Live
The automated video campaign is live! The client receives the AI-generated videos and can create their own. During the whole live period, we provide support to make sure the campaign runs smoothly and hits the objectives.
Each phase takes approximately one week.
What did the early versions of this project look like? What did you learn from this v1?
Camille, art director & designer: Very early versions of the design were more brutalist than the final result, more ‘backstage’ in style: a sort of collage mix of vector icons, gifs, images and gradients. We also tested crazier ideas, like an Escher-style rotating typography in 3D. All of these were interesting to test but, visually, it was going in too many directions at the same time. So, in the end, we decided to go with a more focused option that uses gradients as the main character of each template.
'The biggest challenge from a design point of view was that our work needed to include, for example, NFT visuals from very different collections and styles.'
What was the biggest challenge? Did any part of the project make you step out of your comfort zone?
The biggest challenge from a design point of view was that our work needed to include, for example, NFT visuals from very different collections and styles. We needed to build a container, a unified branding and template that could work well with images from different platforms while still being friendly and recognisable. We had to work with a lot of different visual codes, while creating our own.
Adapting colors to the content was one of the main ideas that helped us. In the NFT templates, for example, colors adapt and change based on the NFT collection, creating the best possible visual match.
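As an illustration of that idea (not the actual implementation), matching a template to an NFT can start from the artwork’s dominant color, with an accent derived from it. The image path and palette math below are hypothetical:

```python
import colorsys
from PIL import Image  # pip install Pillow

def dominant_color(image_path: str) -> tuple[int, int, int]:
    """Return the most frequent color of a downscaled copy of the artwork."""
    img = Image.open(image_path).convert("RGB").resize((64, 64))
    count, color = max(img.getcolors(maxcolors=64 * 64))
    return color

def accent_from(base: tuple[int, int, int]) -> tuple[int, int, int]:
    """Derive a brighter, more saturated accent from the base color."""
    h, l, s = colorsys.rgb_to_hls(*(c / 255 for c in base))
    r, g, b = colorsys.hls_to_rgb(h, min(l + 0.2, 0.9), min(s + 0.3, 1.0))
    return tuple(round(c * 255) for c in (r, g, b))

# Hypothetical usage: theme the NFT template around the artwork itself.
background = dominant_color("nft_artwork.png")
print("background:", background, "accent:", accent_from(background))
```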
Early iteration
How did you overcome this challenge?
In general, we looked for references on how to mix different types of content (images, graphs, memes), trying to decode the visual language used by the blockchain community.
We looked around for collage designs, for ways to mix typography and images, and for strong color-gradient references.
What and/or who inspired you during the creation of this project?
Here are some of our references:
What was your biggest learning or take-away from creating this project?
Camille: We always think that the perfect design is based on limitations: a limited color palette, limited types of elements or visuals, a limited style. Especially as we wanted to give the brand a strong personality.
Working on this project made us rethink this assumption a bit. When working with such a wide range of data / styles / output destinations, we now think that embracing it might be more effective than fighting against it. For example, a larger palette that adapts to the NFTs and Web3 language helped us a lot.
Things are changing fast today, and you need to find clever solutions to adapt. A design can be cool one day, and the day after everyone is working in that style, which already makes it feel “déjà-vu”. Staying consistent with your topic and anticipating all the needs you’ll potentially have in the future is crucial.
Can you point out a detail in the project that might go unnoticed but you’re particularly proud of?
Camille: I’ll let the tech team answer this, because design is always the most noticeable thing, but a lot is going on in the dark 😀
Luca: What excites me the most is when important news breaks all of a sudden and the system generates a perfect video covering it, autonomously. Seeing the AI do its magic + our motion design system put to good use is something that makes me incredibly proud of the work of our team.
Our ultimate goal with Algo is always to create videos that are indistinguishable from human-generated ones. When that happens, the technology disappears and it’s pure magic.
'Our ultimate goal with Algo is always to create videos that are indistinguishable from human-generated ones. When that happens, the technology disappears and it’s pure magic.'
Which part of this project consumed the most time or energy?
Searching for data sources that cover the whole web3 sector and finding the ones that work best for us by trial and error. Then, integrating 5 different APIs: studying their documentation, their pros & cons, their inconsistencies with one another, which subscription to pick for each of them…
This has surely been one of the biggest challenges we faced building the system. I remember our tech team spending many weeks combing through innumerable sources to find exactly what was right for our needs (and cost effective for our client).
What was the result of this project?
The project is just getting started. It’s now going through testing and will be fully rolled out in the upcoming months. I really hope it’ll touch as many people as possible. It’s surely one of our biggest & most relevant case studies to date — I’m sure many of our projects moving forward will be influenced by this.
Where was the project created? What do you enjoy about working there?
Our team is based in Turin, Italy. We share a beautiful office space with our sister studio, illo (https://illo.tv). We have a half-remote, half-in-studio policy, since we can’t deny that sharing a physical space helps spark serendipity and unexpected creative connections. Our designer Camille is fully remote though, working from sunny Marseille, France, and joining us during team retreats or for special projects.
Office entry
Our workspace
Which tools did you use to create this project?
Figma — for anything design and collaboration, including the new variables
Google Slides — for presentations & meetings
Adobe After Effects — for animation and automation
Our proprietary dashboard and cloud render farm
The project data were coming from:
NFTGo — for all NFT data
Messari — for crypto prices
Feedly AI — to auto pick trending articles
Whale-alert.io — for whale transactions
Twitter API — for Twitter data
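Because these five sources all expose differently shaped data, a common way to keep the templates source-agnostic is to normalize every payload into a single internal event format before it reaches the trigger layer. The sketch below is hypothetical, and the field names are illustrative rather than taken from the real APIs:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class NewsEvent:
    """One internal shape that every data source gets converted into."""
    source: str         # e.g. "messari", "nftgo", "feedly", "whale-alert", "twitter"
    kind: str           # "price", "nft", "article", "transaction", "tweet"
    timestamp: datetime
    headline: str
    metrics: dict       # source-specific numbers the templates can display

def from_price_payload(raw: dict) -> NewsEvent:
    """Illustrative adapter for a crypto-price payload (field names are made up)."""
    return NewsEvent(
        source="messari",
        kind="price",
        timestamp=datetime.now(timezone.utc),
        headline=f"{raw['symbol']} moved {raw['change_24h_pct']:+.1f}% in 24h",
        metrics={"price_usd": raw["price_usd"], "change_24h_pct": raw["change_24h_pct"]},
    )

# One small adapter per API keeps each source's quirks contained in one place.
event = from_price_payload({"symbol": "BTC", "price_usd": 43250.0, "change_24h_pct": 6.2})
print(event.headline)
```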
Video template
What are you currently working on, and what's next?
This project might continue into an even more ambitious step: building a chatbot anyone can talk to and ask questions about the web3 world. The AI chatbot would respond in text, as ChatGPT does, but it would also be able to respond with videos created in real time.
Videos tell a story and can make something complex simpler. We humans use videos every day in our chat conversations. It’s time for AI chatbots to use videos too, after all.
Still in the web3 world, we’re about to go live with a Wallet Wrapped project, where we create a video recap of your on-chain activity of the last 30 days. A unique video for every user of an amazing wallet app. Soon in portfolio.
Currently, we’re also working on a huge project helping a major tech company restyle and automate their data-viz and news operations. Under NDA unfortunately.
Who or what are you inspired by lately? Any current influences that you find are seeping into your work?
Synthetic graphic design (such as Jerbosman & Damonxart) and creative coding are our main influences these days. The team has recently been inspired by the generative design capabilities of Cavalry, a new animation software we’re testing out and would love to include in our toolbox, and especially by the work of motion designers like Pepkomotion, Pvonborries and Valerio di Mario.
If you could give your younger self one piece of advice about navigating the design world, what would it be?
Just start. Don’t let any fear or negative thoughts hold you back. You don’t need that degree. Stay debt-free as much as possible. You don’t need that previous experience to apply to that job position. Everyone is learning online anyway. Don’t spend too much time on social media. Avoid comparing yourself to others, if possible. It’s a long road. It’s normal to hate your previous work. Always nurture your curious mind.
From the maker