Today I am diving deep into Freepik, one of the rising stars in the all-in-one AI platform world. Is it worth the hype? Absolutely! I'm doing a full platform tour and showing you exactly why Freepik is a game-changer, whether you're a seasoned AI pro or just starting your creative journey. I use Freepik, and it has earned a place in my workflow. I'm partnering with them for this video, but as always, I'm giving you the straight goods – no hard sell, just real insights.
Freepik's core is its powerful suite of AI generation tools, covering everything from stunning images to dynamic videos. But it's so much more than just that! I'll show you how it stands out from the crowd, especially if you're looking to move beyond platforms like Midjourney.
I'm putting Freepik's AI models to the test: Mystic (their own unique model), Google's Imagen 3 (known for incredible prompt coherence), and Flux. We'll do a head-to-head comparison with a challenging prompt to see which model reigns supreme. Plus, I'll reveal one of Freepik's most exciting new features: integration with Magnific AI for mind-blowing creative upscaling! I'll also show off the editor, which lets you modify your generations directly.
But that's not all! I'll also explore:
Image Editing: Built-in tools that rival a mini-Lightroom.
Outpainting: Easily change aspect ratios and expand your images.
Style Presets: A HUGE library of styles to transform your generations.
Custom Model Training: Create your own unique style and even train a character model (yes, I trained another AI version of myself!).
Video Generation: Access to ALL the major AI video models (MiniMax, Kling, Runway, Luma's Dream Machine, and more!). I'll compare outputs and demonstrate their unique features.
Sound Effects & Lip Sync: Add sound effects directly to your videos and even utilize lip-syncing capabilities.
Design: A library of options for print.
Vector: Options for illustration.
Templates: Many premade templates for a variety of applications.
I'll also break down Freepik's pricing tiers (including the free option!) and show you how it stacks up against the competition.
Hop On Freepik here: https://www.freepik.com/pikaso/ai-image-generator?utm_source=youtube&utm_medium=paid-ads&utm_campaign=theoretically.
AND: If you use the coupon code THEO, you get 50 uses and can try Freepik Premium for 48 hours!
(Enter the code when you hit "Sign Up.")
Chapters
00:00 Intro: The Rise of All-in-One AI Platforms
00:48 What is Freepik?
01:04 Image Generation Models: Mystic, Imagen 3, Flux
01:40 Model Shootout: Comparing Outputs
02:44 Image Editing Tools
03:06 Magnific AI Upscaling
04:21 Retouching: Editor Mode
05:21 Outpainting and Aspect Ratios
05:55 Style Presets
07:09 Training Your Own Style & Character
08:19 Video Generation: All the Major Models
09:19 MiniMax vs. Kling Outputs
09:53 Pro Mode on Models & Auto-Prompting
10:20 Sound Effect Generation
10:49 Lip Sync Capabilities
11:53 Beyond AI: Design, Vectors, Templates
12:09 Pricing and Plans
Today I'm diving into SkyReels, a powerful new AI video model that’s free, open-source, and comes with its own robust platform. In this deep dive, we’ll walk through SkyReels’ unique features—from its human-centric training data to its text-to-video and image-to-video workflows. You’ll see real examples of prompt coherence, scene generation, camera movement, and the innovative “AI Drama” tool for episodic storytelling.
We also discuss the model’s performance benchmarks on an RTX 4090, how its platform pricing compares, and where SkyReels might head next.
Whether you’re an AI enthusiast, filmmaker, or just curious about the latest in generative video, this is the place to start!
If you find this video helpful, feel free to like, comment, and subscribe for more AI-driven content. Thanks for watching!
Key Topics Covered in This Video
Open-source release strategy and GitHub details
Text-to-video and image-to-video generation demos
Human-centric expression and prompt coherence
Built-in image generator and video editor
“AI Drama” episodic storytelling tool
Model performance (benchmarks & resolution)
Platform pricing and credit structure
Future developments in AI animation
LINKS:
SkyReels Platform: http://www.skyreels.ai/home?utm_source=kol&utm_campaign=id_TheoreticallyTim
SkyReels Open Source Model: https://github.com/SkyworkAI/SkyReels-V1
SkyReels A1 Open Source Model: https://github.com/SkyworkAI/SkyReels-V1?tab=readme-ov-file
CHAPTERS
00:00 – Introduction & Model Overview
00:19 – Open Source AI Video & Platform Approach
01:07 – SkyReels Model Specs & Performance
01:50 – Timing on Running Locally
02:03 – The SkyReels Platform
02:27 – Is that Me?
03:16 – Interesting Notes on the Video Models
03:50 – Some Additional Features on the SkyReels Platform
04:04 – Text-to-Video Demonstrations
05:01 – Image-to-Video Showcases
05:49 – Image to Video Test 2
06:38 – Image to Video, This was Insane
07:14 – Image To Video - Multi Character
07:58 – Camera Moves
08:37 – Storyboard Feature in SkyReels
09:15 – Image To Video - No Prompt
09:47 – Style Consistency: Anime and Animation
10:14 – AI Drama & Episode Generation
11:05 – Pose Control!
11:59 – Lip Sync & Expression Model
12:23 – Driving Video to Character Animation
13:06 – Pricing & Credit Options
Google's Stunning AI Video Generator, Veo-2, has finally been released, and it's FREE! Although there are a few interesting catches!
For one, Veo-2 is available via the YouTube app and is probably aimed more at YouTube Shorts. That said, I do have a solution for capturing outputs so you can edit the videos on your own.
Also, is this a turbo model? I'll run a shootout between the web Veo-2 model and the YouTube version to find out. Spoilers: it likely is.
LINK:
Star Wars: The Ghost's Apprentice: https://youtu.be/KWlxMC0j498?si=d643alvIrhrQVZI8
Chapters:
0:00 - Intro
0:35 - Veo-2 Has Released!
1:09 - YouTube Shorts?
1:18 - Getting Started With Veo-2
1:52 - Imagen-3
2:22 - First Shot
2:53 - Some Limitations
3:20 - The Timeline
3:56 - Adding Music
4:19 - Uploading
4:37 - I don't like this
4:51 - Test Short
5:04 - Is this a Turbo Version?
5:20 - Testing the Same Prompt
5:56 - How Much Faster Is It?
6:05 - Moderation Problems
7:02 - Animated Presets
7:24 - Is it Perfect?
Today we're diving into Luma Labs' Dream Machine and their updated Ray 2 model, which is pushing the boundaries of image-to-video technology.
In This Video:
• AI Image-to-Video Revolution: Discover how Luma’s Ray-2 model is setting the bar for image-to-video conversion—with stunning visuals and unexpected “happy accidents” that spark creativity.
• Google’s Imagen 3: See how Google is upping its game with new inpainting capabilities, making it one of the best free AI image generators available today.
• VFX Compositing Game Changer: Get a sneak peek at an upcoming model poised to transform VFX compositing, blending video elements seamlessly like never before.
• Creative Process Insights: Follow the demo featuring Midjourney-inspired visuals, detailed project boards, and camera control tips to achieve that Michael Bay cinematic style.
• Community Showcase: Enjoy a curated selection of impressive community outputs—from cyberpunk anime scenes to dynamic video compositing examples.
LINKS:
Luma Labs: https://lumalabsai.partnerlinks.io/dd1jzuzx6o87
Imagen3: https://deepmind.google/technologies/imagen-3/
Pika: https://pika.art/
Snap Research: https://x.com/AbermanKfir/status/1888987447763292305
CHAPTERS:
0:00 - Intro
0:37 - Luma Labs Image to Video Update
1:04 - Demo Video
1:27 - Luma Platform Walkthrough
2:00 - Midjourney Bayhem
2:28 - First Generation with Luma's Image To Video
3:00 - More Generations
3:15 - Limitations and More Like
3:47 - Wonky Shots
4:04 - Camera Controls
4:52 - Motion and Outputs
5:30 - Community Outputs
6:00 - Animation
6:44 - Lunch on a Skyscraper
7:17 - Imagen 3 Updates
8:21 - Inpainting
9:17 - DALL-E 4 Incoming?
10:00 - Pika additions
10:31 - Another Example
11:01 - Jon Finger's impressive tests
12:15 - Snap Research - AI VFX!
A new AI video model from ByteDance called Goku appears to be very impressive. Featuring a "Plus" model that seems to be aimed at the advertising market, this one could be big for vertical content.
AI Video Just Leveled Up!
Learn more about Hostinger at https://hostinger.com/theoretically (Coupon Code: THEORETICALLY)
In this video, I dive deep into the latest breakthroughs in AI video technology and creative tools that are reshaping digital storytelling. We start by tackling the age-old problem of “noodle bone” character movements with Meta’s new VideoJam solution, then move on to incredible VFX demos that augment real-life footage (yes—even paintings!). I also share insider scoops on Midjourney’s upcoming video advances, updates on OmniHuman’s dancing avatar tech, Topaz Labs’ next-gen upscaling in Project Starlight, and even a look at agile robotics training using real-world sports data. Plus, don’t miss the fun reveal of AI Granny—the chatbot fighting phone scammers—and a walkthrough of an innovative AI storytelling platform that lets you craft your very own narrative (complete with comic-style visuals).
My Thanks to Hostinger for Sponsoring Today's Video!
If you’re passionate about AI, video production, VFX, or futuristic tech trends, this video is for you!
LINKS:
VideoJam: https://hila-chefer.github.io/videojam-paper.github.io/
TopazLabs: https://topazlabs.com/ref/2518/
ASAP Robots: https://agile.human2humanoid.com/
LoreMachine: https://www.loremachine.world/
0:00 – Introduction & Overview
0:26 – VideoJam: Enhanced AI Motion
0:58 - How VideoJam Works
1:31 - Examples of VideoJam
2:37 - VideoJam Vs Other Models
4:00 - VideoJam Release?
4:34 - DynVFX: Video VFX Inpainting?
4:53 - DynVFX Examples
5:32 - Taking a closer look at DynVFX
6:29 - Midjourney and Veo2
6:47 - Tracking Upcoming Models
7:29 - Veo2 API?
8:07 - Building a Site With Hostinger
11:58 - OmniHuman Update
12:30 - Topaz Labs Stunning Upscaler
13:19 - ASAP Robots
14:46 - AI Granny
15:12 - LoreMachine: AI Storytelling
15:55 - Character Image Generation
16:18 - Storyboards and Comic Books?
Thanks for watching – and as always, ship it!
Dive into the latest in AI creativity and innovation in this deep-dive video where I put OpenAI’s new Deep Research to the test! In this episode, I explore cutting-edge AI video generators, cost analyses for cloud subscriptions versus local hardware setups, mind-blowing lip sync avatar technology, and the most impressive AI rotoscoping I’ve seen yet. Plus, I share my firsthand experience from Adobe’s Gen AI Leadership Summit and a ComfyUI event at GitHub HQ!
LINKS
DEEP RESEARCH PDF Reports Here: https://theoreticallymedia.gumroad.com/l/DEEP
In This Video You’ll Discover:
OpenAI Deep Research Overview:
How OpenAI’s deep research release (powered by the latest model) is reshaping the creative AI landscape.
AI Video Generation Showdown:
A detailed report comparing popular tools (Kling, Runway, MiniMax, Gen-3, etc.) on quality, pricing, and performance.
Viral AI-Generated Films:
Analyzing what made films like Airhead, The Heist, Queens of Avalon, and the AI-driven James Bond trailer go viral.
Subscription vs. Local Machine:
A breakdown of costs—including hardware specs (RTX 4090, Ryzen 9, and more!) and training time—to determine which is best for creators.
Next-Gen AI Avatars & Rotoscoping:
Check out the impressive OmniHuman-1 lip sync demo and “MatAnyone”—the ultimate stable video matting technology.
Behind the Scenes at Adobe HQ:
My experiences and honest feedback from Adobe’s Gen AI Leadership Summit and the spirited discussions with industry pros.
CHAPTERS:
0:00 - Intro
0:38 - OpenAI's New Deep Research
1:49 - What is the Best AI Video Generator?
4:58 - Research Results
5:41 - How to Make a Viral AI Video?
7:05 - Research Results
8:07 - Additional Questions
8:34 - What is more Cost Effective, AI Platforms or Local Models?
9:28 - Research Results
12:30 - OmniHuman Video Lip Sync & Avatar
14:53 - MatAnyone
15:49 - Adobe GenAI Leader Summit
17:42 - ComfyUI Meetup
A surprising new challenger to the AI Music Throne has appeared! In this video, we explore the surprise comeback of Riffusion, an AI music generator that’s challenging big names like Udio and Suno. From instant full-track creation and advanced composition controls to remixing and extending your own uploaded audio, Riffusion offers a jaw-dropping blend of features—and it’s currently FREE (for now). Join Tim as he tests everything from hip-hop and country to power metal and personalized guitar riffs. If you’re passionate about the future of AI-generated music, you won’t want to miss this one!
00:00 – The King Is Dead!
00:22 – AI Music Evolution: Riffusion’s Return
00:39 – Let’s Hop in the Hot Tub…
00:58 – Riffusion Beta Overview
01:10 – Prompt Bar & Compose Tab
01:14 – First Riffusion Prompt
01:40 – “July Can’t Come Fast Enough”
02:34 – Modern Country Shots Fired
02:53 – "Same Song at Ten"
03:39 – Hip-Hop Test
04:51 – Power Metal Coffee
05:33 – Ending the Song
05:59 – Instrumentals / Pushing Prompts
06:46 - Compose Tab (Advanced Mode)
07:25 – Blending Genres Test
07:59 – Uploading Audio
08:12 - Original Song
08:44 – Cover Song Prompt Overview
09:25 - Prompting Close To Genre
09:52 – Changing a Cover's Genre
10:31 – Inputting Your Own Music
10:40 - Little Peavey Wonder
11:00 - The Riff
11:20 - Riff To Fusion
11:39 - Riffusion Output
12:15 – Layering & Creative Possibilities
12:51 – Personalized Feature
13:12 – Free Beta & Final Thoughts
What I Cover
• AI Music Creation: How Riffusion generates entire songs in different styles.
• Advanced Composition Tools: Demoing the “Compose” tab, the Weirdness Slider, and multi-genre blends.
• Remix & Extend: Transforming public-domain tracks—or my own riffs—into something brand new.
• Instrumentals & Vocals: From orchestral and synthwave to power metal anthems.
• Practical Tips & Workflow Ideas: Layering AI outputs with original recordings.
Why Riffusion Stands Out
Riffusion’s unlimited free beta is a rare chance to experiment with AI music at no cost. Whether you’re a casual musician or a pro producer, now’s the perfect time to explore AI-driven music innovation.
Try Riffusion (Beta): https://www.riffusion.com/?utm_source=theoreticallymedia&utm_medium=youtube&utm_campaign=launch
Today we’re diving into two big updates in the AI Video generation space. First up is Pika’s new 2.1 model which not only offers outputs in 1080p, but touts more realistic movement and better prompt coherence. Does it live up to the claims, and how does it stack with Pika’s “Ingredients” feature?
Hailuo’s Minimax also has a cool game changer for Camera Control, and this isn’t motion brushes or sliders, it’s something much more interesting!
CHAPTERS
0:00 - Intro
0:26 - Pika Introduces Version 2.1
1:02 - What is in Version 2.1
1:15 - Hopping into Pika
1:42 - Testing Text to Video with 2.1
2:29 - An 80s Sitcom Text to Video
3:39 - 70s Spy Film, Text to Video
4:24 - Testing Image to Video in Pika 2.1
5:08 - Stylized Image to Video in Pika 2.1
5:35 - Prompt Direction in Pika 2.1
6:02 - It might take a re-roll
6:22 - Pirate Woman Example
6:44 - Overall Prompt Understanding
7:04 - How is Pika 2.1 With Ingredients?
7:41 - Viking Cop is Awesome
8:01 - Getting Closer with Ingredients
8:28 - Where I think 2.1 and Ingredients Excels
9:04 - Community Outputs
9:43 - Minimax Releases Director Mode
10:01 - Director Mode Overview
10:23 - Director Mode Examples
11:14 - Community Outputs
11:55 - Closing Out
AI Realtime Just Got Real!
Learn more about Hostinger at: https://hostinger.com/theoretically (Coupon Code: THEORETICALLY)
AI image generation is getting faster and more creative every day! In this video, we take a look at real-time AI image generation with Krea, an all-in-one AI generation platform. See how it works, what the limitations are, and how you can overcome them with a little ingenuity.
Krea's real-time AI image generator has been around for a while, but it's had one glaring problem: consistency. If you make any changes to your prompt, the entire image changes. This can be a problem when you're trying to create a specific image, such as a character or an object.
But Krea has solved this problem by allowing you to train up a model and then use it in the real-time module. This means you can now create images with consistent characters and objects, even if you make changes to your prompt.
We'll also take a look at how to use Krea's real-time AI image generator to create 3D objects. This is a really cool feature that lets you turn your flat images into manipulable 3D models.
Finally, we'll take a look at Film Agent, a multi-agent framework for end-to-end film automation in 3D virtual spaces. This is a really interesting project that could potentially revolutionize the way films are made.
Chapters:
0:00 – Introduction
Quick preview of today’s topics: real-time AI image generation in Krea, plus an open-source model for prompt-to-movie.
0:22 – Krea’s Real-Time AI Image Generation
Overview of Krea as an all-in-one platform with real-time text-to-image functionality.
2:14 – Training Custom Models for Consistency
Solving the biggest issue with real-time generation—keeping the subject consistent.
3:38 – Converting 2D Images to 3D
Demonstration of how Krea can transform flat images (like a jet fighter) into movable 3D objects.
5:19 - Me in 3D
6:11 - Some Tips on Sliders
6:49 – Building a 3D Character from Concept Art
Testing a Midjourney-inspired “Lara Croft”-style character sheet and training it in Krea for 3D manipulation.
8:33 – Adding Elements Into Your Composition
8:49 – Future Pose Control & Closing Thoughts on Krea
Discussing next steps, potential pose/rigging controls, and overall impressions of Krea’s capabilities.
9:39 - Hostinger
12:50 – Film Agent: Multi-Agent Prompt-to-Movie
Deep dive into Film Agent, an open-source framework that uses virtual “director,” “screenwriter,” and “actor” agents in Unity.
13:09 - How Film Agent Works
14:54 – Where Unity Comes In
15:29 – Sora Remix & Visual Enhancements
Running Film Agent’s output through Sora to experiment with AI-driven style and consistency.
15:56 - Sora Remix
16:23 – Pika 2.1 Model Reveal
A quick look at Pika’s upcoming 2.1 release and why it has the AI art community excited.
Diving into the new Frames feature from Runway ML—an AI image generator built entirely by Runway, separate from models like Stable Diffusion or Flux. We'll look at how Frames delivers cinematic image outputs, cover best practices for prompting, and showcase a variety of user-generated examples.
On top of that, Tim explores:
• Kling's New “Elements” Feature that allows up to four reference images and yields 10-second AI-generated videos. Tim shows how this boosts continuity in character designs and environments.
• A John Wick–Severance Crossover made possible by advanced LoRA (Low-Rank Adaptation) techniques, demonstrating how entire characters (not just faces) can be swapped seamlessly in video footage.
• Tribute to David Lynch: Tim takes a moment to acknowledge the legendary filmmaker’s influence and pays homage with a custom “Peak Lynch” style inside Runway’s Frames.
Throughout the video, Tim mixes in behind-the-scenes tips—like how to handle glitchy hands, text consistency, and the best ways to write prompts for cohesive results. He also reveals a handy GPT Prompt Builder tool (linked below) designed to help generate descriptive prompts quickly. Finally, Tim closes by teasing a new Minimax feature for reference-to-video that’s just dropped, promising to showcase more examples soon.
Timestamps:
00:00 – Intro & Overview of Runway’s Frames
00:58 – Diving into Frames + Prompting Tips
01:47 – Frames First Example
02:29 – Frames To Video
03:16 – Frames Second Example
04:08 - Frames Outputs and Vary
05:00 – Prompting In Frames
05:38 – Frames Prompt Builder
06:19 – Styles In Frames
06:44 - Your Own Styles
07:39 – More Cinematic Examples
08:09 – Community Outputs
09:23 - What I'd like to See In Frames
10:03 - Kling's Elements WOW
11:27 - Community Outputs With Elements
12:24 - John Wick Meets Severance
Links & Tools Mentioned:
GPT Prompt Builder https://chatgpt.com/g/g-678faa261f888191946af0ba95a374af-runway-frames-prompt-builder
If you found this video helpful, consider giving it a thumbs-up and subscribing for more AI art, video, and workflow tips! Feel free to share your experiments in the comments, and stay tuned for more updates on these rapidly evolving creative tools.
Vidu AI is a Sleeper Hit! Check out HubSpot's Free ChatGPT resources here: https://clickhubspot.com/7olo
In this video, I dive into the newly released Vidu 2.0 model, exploring its image-to-video and reference-to-video features and its overall performance. I compare it to Luma’s Ray 2 text-to-video approach, as well as Sora’s evolving capabilities, highlighting where each model shines (and where they still struggle). Along the way, I run through fun experiments like blending my own portrait into cyberpunk scenes, creating crowd shots, and even conceptualizing an animated “zombie dogs” project. I also check out how the upcoming Kinetix Tech platform may change character motion control—especially if TikTok ever gets banned! Watch till the end for tips, tricks, and my honest thoughts on these emerging AI video tools.
If you find these demos helpful or inspiring, please hit that Like button, consider subscribing, and let me know in the comments what you’d like to see in future videos. Thanks for watching!
LINKS:
Vidu.AI: https://www.vidu.com/create
Video on Point Diffusion:
https://youtu.be/DVA8XghGmj4
Kinetix Beta Waitlist: https://www.kinetix.tech/sign-up
Today's Sponsor: Hubspot! Thank you!
https://clickhubspot.com/7olo
MINIMAX GPT Prompter (FREE): https://chatgpt.com/g/g-71Fq47Ec6-minimax-cinemotion-prompter
Chapters
0:00 Introduction
0:32 Vidu 2.0 Features & Comparing Versions
1:07 UI Walkthrough & Generating Options
1:43 Amplitude, Duration, & Motion Examples
2:22 First Example
3:01 Man in a Blue Business Suit Update
3:54 Combining Outputs (Luma Ray 2 & Vidu)
4:44 Wizard Orb & Prompting Examples
5:26 Vidu's Scene Understanding
5:49 First Frame Last Frame
6:27 Editing for First and Last Frames
6:55 Limitations of First and Last Frames
7:15 Crowd Scenes & Decoherence Fixes
7:51 Anime/Cartoon Style Tests
8:13 ‘Paws of the Dead’—Animated Zombie Dogs
9:07 Hubspot's ChatGPT Resources
10:46 Reference-to-Video is the solution
11:45 Improving Results with Rerolls & Blending
12:08 More experiments with References in Vidu
13:01 Rockstar Tim
13:35 Trying to "break" Reference Video
14:11 Sora Remix Experiments
14:46 Pros & Cons of Longer Clips
15:04 The Best Remix yet
15:29 Kinetix Tech for Motion Capture
16:37 Pro Use?
17:23 Wrap-Up & Reminders
In this video, I explore Luma Labs’ latest video model, Ray 2. Let's check out what this new era of Dream Machine will bring us. Note that this is a beta release, but it shows huge potential for text-to-video and image-to-video enthusiasts. I’ll also dive into the revamped Dream Machine and showcase community creations that highlight the full spectrum of what Ray 2 can do.
LINK:
Luma Labs: https://lumalabsai.partnerlinks.io/dd1jzuzx6o87
00:00 Intro: Ray 2 Unveiled
00:22 Dream Machine Origins and Evolution
00:49 Early Ray 2 Beta Features (1080p clips, 5-second limitations)
01:09 Test Clips: Deserted Island, Tigers, and… Unwanted Lighthouses?!
01:42 Realistic Animations: Walk Cycles and Water Physics
02:39 Aspect Ratio Options: Exploring Western and Pirate Themes
03:14 When Ray2 Hits, it Hits
03:47 The Best AI Godzilla I've Generated Yet
04:34 Animated Styles With Ray 2
05:19 The New Image Model: Photon
06:27 Cyberpunk Looks With Photon
07:04 Community Outputs with Ray 2
08:14 What’s Next for Ray 2 and Dream Machine
🔧 Tech Highlights:
• Improved photorealism
• New aspect ratios (16:9, 21:9, and more)
• Enhanced animation and camera movement controls
I’m diving into some of the most exciting breakthroughs in AI video generation. First up is a mind-blowing technique called Diffusion as Shader (DAS), which merges 3D tracking with diffusion so you can seamlessly control subjects, locations, and even camera movements. Then, we explore two new game-changers from Adobe—one of them is open-source!—including TransPixar for text-to-video with built-in transparency and FaceLift for single-image 3D head reconstruction using Gaussian splatting.
I’ll also give you a rapid rundown of cool new updates from MiniMax, Runway, and Krea, like subject reference tools, 4K upscaling, and built-in audio generation. You’ll see how these tools push boundaries, from full-body animations to real-time 4D facial capture.
Chapters
00:00 – Intro
00:30 - A Breakthrough in AI Video (DAS)
00:51 - A Brief Primer on Diffusion and Shaders
01:59 – How DAS Solves 2D Animation Issues
03:08 – Benefits of 3D for AI Video
03:44 - Using DAS with FLUX Depth
04:22 – Video-to-Video Magic & Mesh Animations
05:26 – Adobe’s Open-Source TransPixar for Transparency
07:22 – FaceLift for 3D Heads & 4D Novel View Synthesis
08:44 – MiniMax Subject Reference Now Available and What's Next
09:15 - Gen-3 4K Upscale
09:46 – Krea's Video and Sound Generation
Ready to level up your AI video creation game? In this video, Tim dives into two major updates—Hailuo MiniMax’s new S2V-01 “subject-to-video” model and Runway Gen-3’s brand-new Middle Frame feature—showing you how to achieve more consistent character references and creative transitions in your AI-generated videos. You’ll see hands-on demos in Midjourney, tips for crafting better character sheets, and even a few “stupid” but surprisingly effective compositing tricks in MiniMax. Plus, stick around to see how Runway’s latest functionality can help you keyframe animations and spice up your video intros in record time.
LINKS & RESOURCES
Minimax Prompt Generator (GPT): https://chatgpt.com/g/g-71Fq47Ec6-minimax-cinemotion-prompter
THE INTERVIEW Short AI Film: https://x.com/TheoMediaAI/status/1866160155139715248
CHAPTERS:
0:00 - Intro
0:17 - Minimax's New Subject Reference
1:07 - First Tests
1:31 - One Shot vs LoRA
1:53 - AI Characters as References
3:01 - Male Character from The Interview
4:02 - Famous Faces
4:20 - Animated Characters
5:07 - Limitations of The Model
5:45 - Using a Character Sheet
6:09 - How To Make A Character Sheet in Midjourney
6:42 - Character Sheet Results
7:12 - Comping for Stylistic Results
8:23 - A Really Stupid Minimax Trick
9:23 - Runway Gen-3's New Middle Frame
10:05 - First Test
10:30 - Usage for a Title Sequence
11:17 - Is a Middle Frame Useful?
11:41 - Closing
2024 was a WILD Year for AI Video! Check out HubSpot's Free 1000 Prompts here: https://clickhubspot.com/fy4o
What a wild ride 2024 has been for creative AI! From groundbreaking launches to unexpected twists, this year was nothing short of incredible. In this video, I take you month by month through the biggest developments in AI technology. Whether it’s Midjourney’s evolution, OpenAI’s Sora, or game-changing tools like Runway Gen-3 and Stable Diffusion 3, we’ve got it all covered. Don’t miss out on how AI shaped the creative world this year!
Thanks to today's sponsor, Hubspot! Download the 1000 Marketing and Productivity Prompts here: https://clickhubspot.com/fy4o
Chapters
0:00 – Intro: A Year of Creative AI
0:29 – January: Multi-Motion Brushes with Runway
0:55 – February: Sora’s Iconic Debut
2:24 – March: AI Music and Talking Heads Revolution
3:51 - Hubspot's 1000 Prompts
5:18 – April: Firefly Video and Dream Machine
7:00 – May: ChatGPT Voice Assistant Controversy
7:52 – June: Gen-3 and Dream Machine Go Head-to-Head
9:04 – July: Talking Heads and Quiet Progress
9:30 – August: Black Forest Labs Flux Dominates
10:29 – September: MiniMax Steals the Spotlight
11:10 – October: Meta’s MovieGen and Spooky Updates
11:49 – November: Training Models Everywhere
12:07 – December: Sora’s Disappointing Release and the Future
13:39 – Outro: Thanks for an Incredible 2024
Let's dive into Recraft, the AI image generation platform making waves with its unique model! In this in-depth review, I'll explore everything Recraft has to offer creators and designers.
From generating stunning visuals with strong prompt adherence to advanced features like color adjustments, area modification, vector editing, and powerful image sets, I've got you covered!
Key features covered:
Red Panda (Recraft Version 3) Model
Prompt Adherence
Diverse Artistic Styles
Color Adjustments & Palettes
Area Modification (Lasso Tool)
Image Variations & Aspect Ratios
Vector Editing (SVG Export)
Image Sets (Storyboarding)
Creating Custom Styles
Check out Recraft and use code MEDIA12 for a $12 discount on all plans – https://go.recraft.ai/theoreticallymedia
0:00 Introduction to Recraft
0:33 Who Is Recraft For?
1:08 What was the Red Panda Model?
1:55 Creating Your First Image
2:57 Initial Results & Basic Adjustments
3:45 Advanced Editing: Adding Elements and Modifying Areas
4:28 Creating Image Variations and Different Aspect Ratios
5:21 Exploring Diverse Artistic Styles
6:27 Color Palettes and Style Customization
7:35 Vector Editing and SVG Export
8:30 Palette Options and Storyboarding
8:50 Using Image Sets for Storytelling
9:31 Mini Short Film
9:43 Image Sets
10:27 Refining Image Sets and Adding Text
11:51 Creating Your Own Style
12:25 Final Thoughts
Google's Veo 2 Is Stunning! Check out HubSpot's Free ChatGPT resources here: https://clickhubspot.com/wwow
In this video, I dive into Google’s newly released Veo 2 AI video generation model, just one week after the arrival of another major player in the AI video scene.
How does Veo-2 measure up, and is it really the new king of AI video? I put it to the test with a series of prompts—from photorealistic island crash landings to eerie ’80s horror vibes, gritty sci-fi settings, and beyond. I’ll share insights into the UI, show off early outputs, and offer tips on prompting for better results. As this is an early-access look, the model still has quirks and limitations, but the leaps in video realism, character physics, and scene composition are undeniably impressive.
LINKS:
Google Veo-2 Waitlist: https://labs.google/experiments
Google Labs Discord: https://discord.gg/googlelabs
Hubspot: https://clickhubspot.com/wwow
Chapters:
0:00 - Intro
0:25 - Setting the Stage
1:09 - Veo Interface
1:28 - First Run
2:16 - Multiple Outputs
2:25 - Video Alternates
3:00 - Abstract Prompts
3:18 - Basic Prompts - 80s Horror Movie
3:44 - Sci-Fi Prompts and Movement
4:27 - Character Design
5:03 - Hubspot ChatGPT resources
6:19 - Veo 2 Prompt Formula
7:00 - Prompt Formula Results
7:27 - Other Findings
7:49 - Parkour Movements
8:37 - Fight Scenes
9:27 - Image to Video
10:07 - Learning Curve
10:45 - What we Normally Get
11:49 - Veo-2 Vs Sora
12:23 - Features I'd Like to See
13:00 - Tips to Get Access