Normal View

Yesterday — 2 April 2025 — Theoretically Media

Runway COOKED for GEN-4! Ultimate Deep Dive & Review!

1 April 2025 at 21:14


Today, we're diving deep into RunwayML's Gen 4, the latest update to their AI video generation model. Runway claims that Gen 4 boasts improved fidelity and output quality. More importantly, Gen 4 introduces better consistency in character, location, and color grading.

While the highly anticipated reference feature isn't available at launch, there's still plenty to explore. I'm putting Gen 4 through its paces with a series of tests, checking out community outputs, and sharing some best practices.

We'll take a look at everything from image to video generation, character consistency, and even some fun experiments with text and effects. I'll also be comparing Gen 4 to previous Runway versions like Gen 3 and Gen 2.

Runway's Gen 4 is a significant step forward, and I'm excited to share my findings with you. There are still more updates on the way. So, buckle up, and let's jump into the world of Gen 4!

RunwayML: https://runwayml.com/ Runway's Frames Video: https://youtu.be/7jwSNb4qq_E

00:00 - Intro to Runway Gen 4 00:29 - Gen 4 Overview and Key Features 01:17 - Gen 4 at Launch 01:48 - Frames to Video (Man In the Blue Suit) 01:59 - First Video Test 02:24 - Frames Image Generation 02:44 - London Test 02:58 - Gen 4 vs. Gen 3 vs. Gen 2 04:02 - Gen-4 Text (Woman in the Red Dress) 05:13 - Impressive Wide Angle Shot 05:48 - Gen 4 Wonk 06:29 - Expand Feature With Gen-4 07:00 - Fight Sequences (Raging Bull) 07:32 - Noir Test & What Reference Might REALLY Be 08:15 - Noir Test 2 08:42 - How Powerful Will Reference Be? 09:55 - Image to Video 10:29 - No Prompt with Image To Video Test 11:07 - Working with Multiple Faces 12:13 - Grandma with a Flamethrower 12:21 - Style Consistency with Image to Video 12:52 - Cyberpunk Woman 13:55 - Gen-4 To Restylize 13:59 - Community Outputs Showcase 15:58 - Final Thoughts on Gen 4
The day before yesterday — Theoretically Media

This AI Image/Video Platform Opens Up So Many Possibilities (And You Can Try It For Free!)

31 March 2025 at 15:55


Let's take a tour of one of the most interesting AI generation platforms I've come across: Flora. If you're into AI art, image editing with Gemini, and even video generation, this platform has it all.

I know the workflow might look a bit intimidating at first – all those Blocks and connections! But trust me, it's not as scary as it seems. I'll walk you through the basics, from generating images with text prompts to using cool features like style reference and video-to-video generation.

We'll be playing around with Gemini's editing features, creating our own character references, and even training a custom style. Plus, I'll show you how to take those images and turn them into videos!

Whether you're into clean, organized workflows or a bit of creative chaos, Flora has something for you. And the possibilities? Let's just say they got me thinking about fight sequences and character turnarounds!

Check out Flora for yourself! They offer 2000 free monthly credits so you can explore the platform and see if it's a good fit for you.

LINKS: FLORA: https://tinyurl.com/floratim

CHAPTERS 00:00 - Intro to Flora: An AI Playground 00:28 - The AI Murderboard Isn't Scary 00:52 - Understanding the Workflow 01:19 - Text to Image Generation with LLMs 01:55 - Image Generation Options 02:42 - Video Generation Options 03:21 - Organization on a Canvas 03:53 - Gemini's Image Editing Features 04:46 - Flora Styles 05:25 - Cinematic Crime Film Style 06:30 - Training Your Own Style 07:33 - Testing Out a Custom Style 08:27 - Video Generation Tests 08:57 - Off Book Video to Video Generation 09:50 - Character Edits With Gemini 11:26 - Impressive Turnarounds! 12:37 - Thoughts on Turnarounds Use Cases 12:57 - Advanced Flora Technique 13:25 - Storyboarding with AI 13:54 - Flora's Free and Paid Plans 14:12 - Next Up for Flora

Behind The Scenes Of An AI Film (Masterclass & Cost Breakdown!)

27 March 2025 at 22:36


Hey everyone! I'm super excited to finally share a full production breakdown of my AI short film, "The Bridge." The response has been incredible (almost 400,000 views across platforms!) and I'm here to spill all the secrets on how I made it. In this video, I'll walk you through the entire process, from pre-production and AI tools, all the way to final post-production. I'll also dive into the costs and compare them to traditional filmmaking.

If you haven't seen the short yet, don't worry, I've included it in the video!

My thanks to Recraft for sponsoring today's video! Check out Recraft – https://go.recraft.ai/media. Use my code MEDIA for $12 off any paid plan.

LINKS: Jason Zeda's Wu-Tang Clan Video: https://youtu.be/ZBTb_xJBh5c?si=kqZPfc0h30F2oTBN Henry's Prompt Tweet: https://x.com/henrydaubrez/status/1894513057109348405 Google Labs Discord: https://discord.gg/googlelabs Topaz Upscale: https://topazlabs.com/ref/2518/ ElevenLabs: https://try.elevenlabs.io/w5183uo8pydc Hume: https://www.hume.ai/ Hedra: https://www.hedra.com/

CHAPTERS 00:00 – Introduction 00:37 - The Bridge Views 01:20 - The Bridge 03:37 – Pre-Production 04:11 – AI Tool Selection 05:10 - Re-Inspired 05:31 – Prompt Engineering 06:13 - Working with an LLM on a Film 06:33 – Veo-2's Super Power 07:50 – Achieving Visual Consistency 09:10 – Audio Production 09:52 – Lip Syncing with Hedra 10:32 - Upscaling 11:16 – More Upscaling 11:52 - Making a Poster w/ Recraft 13:23 – Post-Production in Premiere Pro 13:55 - Death's Voice 14:40 – Is it Perfect? 15:25 - Cost Breakdown 17:01 - Comparison To Traditional Filmmaking 18:11 – Make Movies 18:55 - It's just not there yet

OpenAI's Stunning Image Model Has Some Cool Tricks!

26 March 2025 at 21:21


OpenAI has dropped a brand NEW AI image generation model, and it's NOT Dall-E 4! In this video, I'm taking you on a deep dive into this remarkable AI model and showing you EVERYTHING it can do. We'll explore its impressive capabilities, uncover some hidden features, and yes, even talk about its quirks. Is this the end for avocado chairs?!

In this video, I cover:
* OpenAI's new AI image generator (it's not Dall-E!)
* Image generation AI tools and comparisons
* AI art techniques and workflows
* Text-to-image AI generation
* AI character design and consistency
* Sora AI video generation updates
* AI art community highlights
* New AI tools and software

This new AI is seriously impressive, but it also has some limitations. I'll show you what I've discovered in my testing, including how it handles text, image referencing, and creating consistent characters. Plus, we'll take a look at the latest Sora updates and some incredible AI art from the community. If you're interested in AI art, image generation, or just the latest in AI tech, you NEED to watch this!

LINKS: THE BRIDGE: https://youtu.be/YDlME4qvER8 OpenAI: https://openai.com/ Reve: https://preview.reve.art/app Ideogram: https://ideogram.ai/

#AI #ImageGeneration #OpenAI #Dalle #Dalle3 #Sora #ArtificialIntelligence #TechReview #AIArt #Stablediffusion #Midjourney #AItools #TexttoImage #CreativeAI #aimusic

Chapters: 00:00 - Intro: OpenAI's New Image Generator - Is Dall-E Dead? 00:18 - Quick Announcement: My AI Short Film - The Bridge 00:41 - Goodbye Dall-E? First Look at the NEW AI Model 01:28 - Image Generation Tests: Blue Suit Guy, Samurai & More! 02:11 - Controls and Options 03:04 - Samurai Test 03:15 - Clown with a chainsaw 03:45 - AI Challenge: Complex Prompts with the Woman in the Red Dress 04:17 - Remixing for Different Angles 04:49 - Creative AI: Underwater Scenes, 90s Nostalgia & VHS Tapes 05:24 - The VHS Tape 05:48 - Text Generation is INSANE! Alan Wake Novelization Test 06:56 - GTA 7 Box Art Test 07:52 - Image Referencing EXPLAINED: How to Use Reference Images 08:04 - Scrambling Faces 08:26 - Not John Wick 09:04 - Multiple Image References 10:24 - Illustrated to Photo with Image Reference 11:13 - Sora Time 11:48 - Community Outputs 13:04 - More AI Tools You NEED to Know: Reve, Ideogram 3.0

THE BRIDGE: A Stunning AI Film Created with Veo-2.

24 March 2025 at 16:21


Presenting: The Bridge. An AI short film utilizing Google's Veo-2. I'm really proud of this one, as my goal (as always) is to push storytelling, performance, and narrative in this emerging art form, and I feel like I managed to pull off something close to that here. Every shot utilized Veo-2, although there were a few post-generation tricks here and there. I'll cover everything in detail in an upcoming video. In the meantime, I hope you enjoy the short.

Adobe Goes AI Dirty with Flux, Runway, & Google!

20 March 2025 at 21:41


Adobe AI & Flux! Build your own web app with Hostinger Horizons: http://hostinger.com/tmedia & use Code TMEDIA for 10% off!

In this video, I dive deep into some HUGE updates in the world of AI image and video generation! Adobe is surprisingly opening up its ecosystem to third-party models like Black Forest Labs (Flux), Google's Imagen 3, and RunwayML's Frames – right inside Adobe Express and Project Concept! I explain what this means for creators, especially the ability to finally "get dirty" with models beyond Firefly. Plus, I'm testing out Stability AI's brand new FREE virtual camera tool and showing you how it could revolutionize how we control camera angles in generated content. I even use it in conjunction with Google's Gemini and Runway!

Key Topics: Adobe's surprising partnerships with external AI model providers. What Flux, Imagen 3, and Runway Frames bring to the Adobe ecosystem. The implications for Photoshop and Premiere Pro (and what about Sora?). Hands-on with Stability AI's Stable Virtual Camera. Combining Stable Virtual Camera with Gemini 2.0 and Runway Gen 3 for enhanced control.

Project Concept Beta Waitlist: https://concept.adobe.com/discover
Stability AI Virtual Camera (Hugging Face): https://huggingface.co/spaces/stabilityai/stable-virtual-camera
Previous Video (Gemini 2.0): https://youtu.be/llvyFBTyiGs

Chapters 00:00 - Intro: Adobe Enters the Black Forest! 00:25 - Adobe Embraces Third-Party AI Models 01:18 - My Experience at Adobe's AI Summit 01:45 - Adobe's New Partnerships: Flux, Imagen 3, Runway Frames 02:22 - Where These Models Will Appear First 03:14 - The Interesting Inclusion of Video: Veo2 04:04 - Potential Cost Savings with Integration 04:26 - Hostinger Horizon AI Web App Builder 08:27 - Stability AI's Stable Virtual Camera 08:28 - How to Use Stable Virtual Camera (Hugging Face Demo) 10:36 - Limitations and Research Preview Status 10:48 - Combining with Gemini 2.0 and Runway Gen 3 11:15 - The Future of Camera and Subject Control 11:48 - A real look at what is coming

Google's FREE AI Image Game Changer Is KILLER

17 March 2025 at 21:39


Is Google's Gemini 2.0 about to revolutionize image generation and editing? In this video, I'm diving deep into Google's latest AI release and exploring its powerful capabilities. Some are calling it a Photoshop killer, but is it really? We'll break down what Gemini 2.0 CAN do, its limitations, and how you can start using it for FREE right now!

We'll cover:
Image generation quality and prompt coherence
Editing existing images (even Midjourney!)
Creating image sequences for AI video
Generating consistent characters for AI training (LoRA)
Cool community examples & use cases!
Sneak peek at upcoming video generation features!

Whether you're an AI enthusiast, content creator, or just curious about the future of image technology, this video is for you!

👇 Links & Resources: Google AI Studio: https://aistudio.google.com/prompts/new_chat

👍 Like, Subscribe, and hit the notification bell for more AI content!

Community Outputs Bilawal Sidhu: https://x.com/bilawalsidhu Victor M: https://x.com/victormustar Min Choi: https://x.com/minchoi Aiba Keiworks: https://x.com/AibaKeiworks Umesh: https://x.com/umesh_ai

#Gemini2.0 #AIImageGeneration #AIArt #GoogleAI #Midjourney #Dalle3 #AITools #FreeAI #PhotoshopKiller #ArtificialIntelligence

CHAPTERS 0:00 - Intro 0:31 - Gemini Goes Multimodal 0:45 - This is What OpenAI implied 1:33 - How To Access AI Studio 1:55 - Rate Limits 2:15 - Image Tests With The Man In a Blue Business Suit 3:20 - Generating Video With Luma Labs 3:38 - Cinematic Angles with Midjourney Images 04:31 - Image Fidelity Loss and How To Overcome 05:17 - Things can still be wonky 05:30 - Tips and Tricks With Rerolling 06:19 - Using Real Photos 6:37 - Using This For Video Keyframes 7:14 - Using 3 keyframes in Runway 7:35 - Speedramping 8:02 - The Gamechanger for LoRAs 9:02 - Community Outputs 10:27 - Video is Coming!

AI's Next Horizon Is Real Time Game Characters

14 March 2025 at 13:54


Is AI the future of gaming? Sony's AI demo sparked a huge debate! We're diving deep into the tech, the controversy, and how YOU can create even BETTER AI-powered characters using readily available tools. Join us as we explore the exciting (and sometimes hilarious) possibilities of AI NPCs! In this video, we'll break down: Sony's AI demo and why it got mixed reactions. The technology behind the demo: OpenAI's Whisper, ChatGPT, Llama, and more. How to create your own AI characters with realistic voices and lip-sync using tools like Hume and Hedra. The future of AI in gaming, including real-time reskinning and interactive NPCs. Get ready to see how AI is transforming the gaming landscape! Thanks to Hume for Sponsoring Today's Video: Go get some Free credits: Hume AI: https://try.hume.ai/yooyhnds8jtd Links: Hume AI: https://try.hume.ai/yooyhnds8jtd Hedra: https://www.hedra.com/ Krea.ai: https://www.krea.ai/ Magnific: https://magnific.ai/ Previous Hume Video: https://youtu.be/KQjl_iWktKk Tyyle's Cyberpunk Reskin: https://x.com/tyyleai/status/1898483733243502713 Runway Restyle Video: https://youtu.be/EsPGN6NdIgM 00:00 - Intro 00:21 - Sony's AI Controversy 00:53 - Horizon Zero Dawn: A Brief History 02:01 - The Demo Clip 02:25 - A Note on Fair Use 02:44 - Analysis of the Demo 03:07 - Tech Behind the AI Demo 03:34 - Face Animation and PS5 Capabilities 04:07 - This is a Glimpse of What Is Possible 04:34 - Building a Higher Fidelity AI Aloy 05:38 - Upscaling With Magnific 05:50 - AI Voice Generation 06:18 - Creating Original Voices With Hume 07:16 - The Horizon Zero Dawn Audiobook with Hume 07:40 - Hume Pricing (Free Tier!) 08:14 - Creating the Conversation 08:28 - Insane Lipsync with Hedra 08:42 - Hedra Quick Demo 09:06 - My Convo with AI Aloy 10:02 - Differences 10:13 - A Glimpse Into the Future 10:54 - Outro

This free AI Speech is Wild! Acting & Emotion Performances!

11 March 2025 at 14:45


Today I'm looking at the new AI text-to-speech model, Octave from Hume, and I'm pretty impressed. If you're like me and want AI voices that actually have some feeling behind them, you're going to love this. Plus, the free tier is super generous, and the paid options are incredibly affordable. Let's check it out! In this video, I'm giving you a full walkthrough of Octave, showing you how it balances linguistic accuracy with genuine emotional understanding. We'll listen to some demo voices, create our own custom voices, and even test out the project features for longer content. Get ready to say goodbye to flat, robotic AI voices! LINKS: HUME: https://try.hume.ai/yooyhnds8jtd Key Highlights: Octave by Hume: A new text-to-speech model that focuses on emotional understanding. Generous free tier: 10,000 characters and unlimited custom voices. Affordable paid plans: Perfect for creators of all levels. Customizable voices: Create unique voices with specific tones and accents. Project features: Ideal for longer projects like audiobooks and podcasts. SEO Keywords: AI voice, text to speech, AI speech, voice generation, AI voice generator, Hume Octave, AI audio, voice design, AI podcast, AI audiobook. Here’s what we cover: 00:00 – Introduction: The Problem with AI Voices and Octave's Solution 01:16 – Octave in Action: Demo and First Impressions 02:04 – Exploring the Hume Interface: Quick Start and Speech-to-Speech 03:51 – Diving into the Octave Playground: Preset Voices 04:11 - Preset Voices Continued 04:53 – Custom Voice Design: Creating Your Own Unique Voices 05:36 - Name The Quote! 05:56 - Interesting Trick Quoth The Raven 06:53 – Advanced Text Prompts: Enhancing Voice Performance 07:33 - Acting Results 08:14 – Project Features: Octave For Longer Content (Audiobooks, Podcasts, etc.) 
08:32 - Alan Wake The Audio Novel Backstory 08:58 - Alan Wake The Audiobook Sample 09:22 - Additional Trick with Projects 10:19 – Pricing and Plans: Affordable AI Voice Solutions 11:07 – Future Updates and Final Thoughts: What's Next for Octave What do you think of Octave? Have you tried it yet? Let me know in the comments below! Don't forget to like this video and subscribe for more AI content. And if you know a great lip sync model, please tell me about it!

Runway’s New Restyle Smashes AI Video!

6 March 2025 at 22:50


Runway's AI Video Gamechanger! Check out Hubspot's Complete Claude Guide here: https://clickhubspot.com/tqk6

Runway Gen-3 just dropped a GAME-CHANGING feature: Style Transfer for video! This lets you take the style of ANY image and apply it to your video's first frame, influencing the entire clip. We're diving deep into how it works, exploring its limitations, and uncovering some powerful workflows you can use RIGHT NOW. I'll show you how to combine Runway with Midjourney, Magnific AI, and even Runway's OWN tools to achieve incredible results. Plus, I'll reveal Runway's secret weapon for PERFECT lip-sync and performance capture!

Key Features & Topics Covered: Runway Gen-3 Alpha: The latest update and how to access it. Style Transfer: A complete walkthrough of the new feature. Midjourney Integration: Using Midjourney's "re-texture" for style creation. Magnific AI: Leveraging Magnific's style reference capabilities. Runway ACT-1: Combining style transfer with performance capture. Workflow Optimization: Tips and tricks for best results. Limitations & Troubleshooting: Understanding the tool's current boundaries. Creative Examples: From Twilight Zone to A Trip to the Moon and beyond! Say Motion Integration. Gen-1 vs Gen-3 Comparison

LINKS Runway: https://runwayml.com/ Midjourney: https://www.midjourney.com/ Magnific: https://magnific.ai/ Say Motion: https://www.deepmotion.com/saymotion

00:00 - Intro: Runway Gen-3's Style Transfer Revolution! 00:24 - How Style Transfer Works in Gen-3 Alpha 00:54 - First Test: Twilight Zone Reimagined 01:44 - Midjourney Re-Texture Workflow 02:51 - Structure Transformation Setting Explained 03:20 - A Trip to the Moon in Steampunk Style 04:32 - Using Footage of Tuesday, and Magnific AI 05:01 - Magnific Style Transfer 05:40 - Results 06:24 - Fixing Lip Sync & Movement with ACT-1 06:44 - The Full ACT-1, Gen-3, and Midjourney WORKFLOW 07:11 - Hubspot's New Claude Ebook 08:37 - What Not To Do (Hilarious Fail!) 09:22 - Experimenting with Say Motion 10:38 - Gen-1 vs. Gen-3: The Evolution of AI Video

This Open-Source AI Video Generator Nailed It!

27 February 2025 at 23:25


Is free and open-source AI video finally here and actually impressive? We're diving into WAN 2.1 from Alibaba, the new video model making waves for its realistic physics and cinematic potential! Plus, we test out Topaz Labs' Project Starlight video upscaler (also FREE!) on AI footage and explore Luma Labs Dream Machine's new video-to-audio feature!

In this video we cover:
WAN 2.1 Revealed: We explore the features of WAN, from text-to-video and image-to-video to upcoming video-to-video capabilities and its impressive 14B model. Is "WAN" the new X-factor in AI video generation?
Hands-On WAN Testing: See real examples generated on platforms like Fal, Nim and Krea. We analyze text-to-video quality, aesthetic style, and frame rate.
FREE Upscaling Magic: Topaz Labs Project Starlight beta is FREE right now! We test its ability to enhance AI-generated video and reveal the surprisingly subtle (but powerful) results.
Luma Dream Machine Audio: Video-to-audio is HERE! We experiment with Dream Machine's new audio feature, adding sound effects and even gibberish talking to AI videos – is it mind-blowing or just plain fun?
Open Source & Affordability: WAN is open source! We discuss the implications and the accessibility of this powerful new model, including hardware requirements and generation speeds compared to SkyReels.

Whether you're a seasoned AI creator or just curious about the latest breakthroughs, this video is your essential guide to WAN, Topaz Labs Project Starlight, and Luma Dream Machine's audio update!

Mentioned Links: WAN Code & Weights (Open Source): Download the WAN models (1.3B & 14B) here: https://wanxai.com/ Fal.AI: https://fal.ai/ NIM Video: https://nim.video/explore Krea: https://www.krea.ai/ Topaz Labs Project Starlight: Try out the FREE beta of Project Starlight here: https://topazlabs.com/ref/2518/

CHAPTERS 0:00 - Intro: New FREE AI Video Model And More! 00:45 - WAN 2.1: Open Source & Affordable 01:20 - Features In WAN 01:53 - Open Source Models 02:15 - Comparison to SkyReels 02:50 - Text To Video Test on Fal 03:37 - Text to Video Test 2 04:20 - Cost on Fal for Generation 04:33 - Comparison to Veo-2 04:45 - Stylistic Outputs 05:13 - Nim's Free Upscaler 05:50 - Image to Video Test 06:07 - Sidequest to Topaz Labs 06:40 - Project Starlight is Free 06:53 - Starlight Test 07:41 - Image To Video Test Continued 08:00 - Image to Video Test Again 08:23 - Wan on Krea 08:51 - Image to Video On Krea 09:34 - Community Outputs 10:42 - Video to Audio on Luma Labs 11:14 - Video To Audio Test 11:36 - Having Fun With Audio 11:58 - Music? 12:05 - Creepy AI Ghosts and Dolls 12:33 - Wrap Up

Are you excited about free and open-source AI video models like WAN? What other AI video tools should we test? 👍 Like this video if you learned something new! 🔔 Subscribe for more AI video explorations, reviews, and tutorials!

#AIVideo #VideoGenerator #WANModel #OpenSourceAI #TopazLabs #ProjectStarlight #VideoUpscaling #LumaLabs #DreamMachine #VideoToAudio #ArtificialIntelligence #FreeAI #TechReview #Tutorial #CreativeAI #FalAI #Nim #KreaAI

Veo-2 Image To Video: Is It Worth The Price?

25 February 2025 at 21:50


Is Google's new Veo-2 video generator, dubbed the "Midjourney of AI video," really worth the hype (and the hefty price tag)? We dive deep into the full, Big Daddy version of Veo-2 as it breaks free from YouTube Shorts and hits platforms like Freepik and Fal.AI. But how does it stack up against Minimax's new Image To Video Director Mode and its updated image-to-video capabilities?

In this video, we put Veo-2 and Director Mode to the test, exploring:
Image-to-Video Quality: See side-by-side comparisons and witness the stunning (and sometimes hilarious!) results. From dragons over "not Winterfell" to 80s espionage gone wrong, we examine the visual fidelity and creative interpretations of these AI models.
Camera Control & Styles: Minimax's new camera controls are a game-changer! We test panning, tilting, tracking shots, zooms, and even camera shake to see how much cinematic control you really have. Plus, can these AI tools maintain consistent styles?
Cost Analysis: Ouch! Veo-2's pricing is... premium. We break down the cost per second and compare it to alternatives like Kling and Luma. Is Veo-2 worth the investment, or are there more budget-friendly options for your AI video needs?
Community Outputs: We showcase impressive examples from the AI video community to inspire your own creations.

Whether you're a seasoned AI artist or just curious about the future of video generation, this video is your essential guide to Google Veo-2 and Minimax Director Mode.

Mentioned Links: Freepik Deep Dive: https://youtu.be/_4YpmdyWors FAL.AI: https://fal.ai/

00:00 - Intro 00:38 - First Look at Veo-2 Image-to-Video 01:20 - The Big Veo-2 Model Is Released 01:38 - Does Veo-2 Handle Text and Image differently? 01:55 - Text and Image Shoot Out in Veo-2 03:02 - Veo-2 Image to Video Test One 03:27 - Veo-2 Image To Video Style Consistency 04:09 - Problems with Veo-2 Image To Video 04:50 - Impressive Veo-2 Output 05:38 - Veo-2 Image To Video Test 06:22 - Veo-2 Aliens Invade Philly 06:53 - Veo-2 Cold War Movie 07:33 - Veo-2 vs Sora 08:13 - Community Outputs 09:03 - Veo-2 Image to Video: Is it Good? 09:20 - How Much Is It? 09:56 - Minimax Director Mode Enters the Ring 09:55 - Director Mode: Camera Control Deep Dive 10:26 - Minimax Camera Control Tests & Examples 11:11 - Multi Prompting 11:40 - Minimax gets funny 11:58 - Minimax Style & Consistency Strengths 12:28 - Minimax Limitations & Text-to-Video 13:16 - The Jaws "Zolly" Shot 13:34 - Outro

AI Video Games, NEW (Open) Video Model-- and MORE!

20 February 2025 at 22:56


AI Games, A New Video Model & More! Grab Hubspot's AI Toolkit here: https://clickhubspot.com/6mbe

Today I am diving DEEP into the wild world of AI and video games... and beyond! Microsoft Research just dropped a bomb with their new AI "game creator" (it's more complicated than it sounds!), and I'm breaking down everything you need to know. PLUS, I'm looking at ByteDance's "Phantom," a new AI tool that turns single images into videos – and what it might mean for the future of TikTok. Finally, I'm taking a look at the overhauled Kaiber AI, now Super Studio. We're talking AI-powered game development, video generation, and even the potential to resurrect classic games! Get ready to have your mind blown! 🤯

🔥 What I Cover in This Video:
Microsoft's "Muse": Is it REALLY an AI game creator? I'm exploring the tech (WHAM – World and Human Action Model) behind it, how it was trained using actual gameplay from Bleeding Edge (thanks, Ninja Theory!), and the implications for game developers. I'll even touch on Phil Spencer's idea of using AI for game preservation... Skyrim on your iPhone 32, anyone? 😉
Alibaba's "Wan 2.1": A new open-source video AI.
ByteDance's "Phantom": This one-shot image-to-video generator looks simple at first, but I think there's a bigger plan here, especially for TikTok creators. I'm dissecting the demo videos and speculating on how this could change short-form video content.
Kaiber's Super Studio: Going from spaghetti to AI super powers. See what this thing can do!
BONUS: I'm highlighting a FREE AI marketing toolkit from HubSpot that's packed with actionable strategies, prompt templates, and tools to level up your business. (Link below!)

🔑 Keywords: AI, Artificial Intelligence, Video Games, Game Development, Microsoft Research, Muse, WHAM, Bleeding Edge, Ninja Theory, Nvidia H100, AI Training, Game Preservation, Phil Spencer, ByteDance, Phantom, Image to Video, TikTok, Short Form Video, AI Video Generation, AI Tools, HubSpot, AI Marketing, Kaiber, Super Studio, AI Art, Video Restyle.

⬇️ FREE AI Marketing Toolkit from HubSpot: https://clickhubspot.com/6mbe

LINKS: Microsoft's MUSE: https://www.microsoft.com/en-us/research/blog/introducing-muse-our-first-generative-ai-model-designed-for-gameplay-ideation/ Kaiber.AI: https://play.superstudio.app/timpro

👍 Like this video? Smash that like button, subscribe, and hit the notification bell for more AI news and deep dives!

Chapters 00:00 - Intro: AI Game Changers! 00:32 - Microsoft's Muse: AI Meets Gaming 01:33 - The Tech Behind Muse: WHAM Explained 01:50 - Training Muse with Bleeding Edge 02:11 - The Hardware 02:32 - Overcoming AI Generation Issues 03:08 - Muse as a Developer Tool 03:43 - The Future of Muse: Game Preservation? 04:26 - Alibaba's new open source model 05:49 - Hubspot's AI Marketing Toolkit 07:10 - Kaiber Super Studio: Deep Dive 07:45 - The New Features 08:23 - Kaiber: Weird and Artistic Examples 08:57 - Weirder with the New Model 09:14 - Kaiber: Audio Tools Tour 10:09 - Kaiber: Practical Workflow Example 10:55 - Kaiber's Secret Superpower 11:37 - Video with Minimax 11:55 - Output to Video Restyle 12:20 - Final Video 12:45 - Kaiber is Still Cooking 13:13 - ByteDance's Phantom: TikTok's Next Move? 13:58 - Phantom and the Future of Short-Form Video

FreePik: When an AI Platform Has All The Things

19 February 2025 at 21:49


Today I am diving deep into Freepik, one of the rising stars in the all-in-one AI platform world. Is it worth the hype? Absolutely! I'm doing a full platform tour and showing you exactly why Freepik is a game-changer, whether you're a seasoned AI pro or just starting your creative journey. I use Freepik, and it has earned a place in my workflow. I'm partnering with them for this video, but as always, I'm giving you the straight goods – no hard sell, just real insights.

Freepik's core is its powerful suite of AI generation tools, covering everything from stunning images to dynamic videos. But it's so much more than just that! I'll show you how it stands out from the crowd, especially if you're looking to move beyond platforms like Midjourney. I'm putting Freepik's AI models to the test: Mystic (their own unique model), Google's Imagen 3 (known for incredible prompt coherence), and Flux. We'll do a head-to-head comparison with a challenging prompt to see which model reigns supreme. Plus, I'll reveal one of Freepik's most exciting new features: integration with Magnific AI for mind-blowing creative upscaling! I also show the editor that allows modifications!

But that's not all! I'll also explore:
Image Editing: Built-in tools that rival a mini-Lightroom.
Outpainting: Easily change aspect ratios and expand your images.
Style Presets: A HUGE library of styles to transform your generations.
Custom Model Training: Create your own unique style and even train a character model (yes, I trained another AI version of myself!).
Video Generation: Access to ALL the major AI video models (MiniMax, Kling, Runway, Luma's Dream Machine, and more!). I will compare outputs and demonstrate their unique features.
Sound Effects & Lip Sync: Add sound effects directly to your videos and even utilize lip-syncing capabilities.
Design: A library of options for print.
Vector: Options for illustration.
Templates: Many premade templates for a variety of applications.

I'll also break down Freepik's pricing tiers (including the free option!) and show you how it stacks up against the competition.

Hop On Freepik here: https://www.freepik.com/pikaso/ai-image-generator?utm_source=youtube&utm_medium=paid-ads&utm_campaign=theoretically. AND: If you use the coupon code THEO you get 50 uses, and can try Freepik Premium for 48 hours (when you hit "Sign Up")!

Chapters 00:00 - Intro: The Rise of All-in-One AI Platforms 00:48 - What is Freepik? 01:04 - Image Generation Models: Mystic, Imagen 3, Flux 01:40 - Model Shootout: Comparing Outputs 02:44 - Image editing tools 03:06 - Magnific AI Upscaling 04:21 - Retouching: Editor Mode 05:21 - Outpainting and aspect ratio 05:55 - Style Presets 07:09 - Training Your Own Style & Character 08:19 - Video Generation: All the Major Models 09:19 - Minimax vs Kling outputs 09:53 - Pro mode on models, auto prompting 10:20 - Sound Effect Generation 10:49 - Lip Sync Capabilities 11:53 - Beyond AI: Design, Vectors, Templates 12:09 - Pricing and Plans

WILD AI Video Model Is Open Source & Platform? Skyreels Does It Right!

18 February 2025 at 22:15


Today I'm diving into SkyReels, a powerful new AI video model that's free, open-source, and comes with its own robust platform. In this deep dive, we'll walk through SkyReels' unique features—from its human-centric training data to its text-to-video and image-to-video workflows. You'll see real examples of prompt coherence, scene generation, camera movement, and the innovative "AI Drama" tool for episodic storytelling. We also discuss the model's performance benchmarks on an RTX 4090, how its platform pricing compares, and where SkyReels might head next. Whether you're an AI enthusiast, filmmaker, or just curious about the latest in generative video, this is the place to start! If you find this video helpful, feel free to like, comment, and subscribe for more AI-driven content. Thanks for watching!

Key Topics Covered in This Video:
Open-source release strategy and GitHub details
Text-to-video and image-to-video generation demos
Human-centric expression and prompt coherence
Built-in image generator and video editor
"AI Drama" episodic storytelling tool
Model performance (benchmarks & resolution)
Platform pricing and credit structure
Future developments in AI animation

LINKS: SkyReels Platform: http://www.skyreels.ai/home?utm_source=kol&utm_campaign=id_TheoreticallyTim SkyReels Open Source Model: https://github.com/SkyworkAI/SkyReels-V1 SkyReels A1 Open Source Model: https://github.com/SkyworkAI/SkyReels-V1?tab=readme-ov-file

CHAPTERS 00:00 – Introduction & Model Overview 00:19 – Open Source AI Video & Platform Approach 01:07 – SkyReels Model Specs & Performance 01:50 - Timing on Running Locally 02:03 - The SkyReels Platform 02:27 - Is that Me? 03:16 - Interesting Notes on the Video Models 2:50 - Some Additional Features on the SkyReels Platform 04:04 – Text-to-Video Demonstrations 05:01 – Image-to-Video Showcases 05:49 - Image to Video Test 2 06:38 - Image to Video, This was Insane 07:14 - Image To Video - Multi Character 07:58 - Camera Moves 08:37 - Storyboard Feature in SkyReels 09:15 - Image To Video - No Prompt 09:47 - Style Consistency: Anime and Animation 10:14 – AI Drama & Episode Generation 11:05 - Pose Control! 11:59 – Lip Sync & Expression Model 12:23 - Driving Video to Character Animation 13:06 – Pricing & Credit Options

The BEST AI Video Generator Is OUT & FREE! (with some catches...)

13 February 2025 at 21:57


Google's stunning AI video generator, Veo-2, has finally been released, and it's FREE! Although there are a few interesting catches: for one, Veo-2 is available via the YouTube app and is probably more aimed at YouTube Shorts. That said, I do have a solution for capturing outputs so you can edit the videos on your own. Also, is this a turbo model? I'll run a shootout between the web Veo-2 model and the YouTube version to find out. Spoiler: it likely is.

LINK:
Star Wars: The Ghost's Apprentice: https://youtu.be/KWlxMC0j498?si=d643alvIrhrQVZI8

Chapters:
0:00 - Intro
0:35 - Veo-2 Has Released!
1:09 - YouTube Shorts?
1:18 - Getting Started With Veo-2
1:52 - Imagen-3
2:22 - First Shot
2:53 - Some Limitations
3:20 - The Timeline
3:56 - Adding Music
4:19 - Uploading
4:37 - I Don't Like This
4:51 - Test Short
5:04 - Is This a Turbo Version?
5:20 - Testing the Same Prompt
5:56 - How Much Faster Is It?
6:05 - Moderation Problems
7:02 - Animated Presets
7:24 - Is It Perfect?

Luma Is The BEST AI Image To Video Now? Plus: Pika & Google Updates!

11 February 2025 at 23:30


Today we're diving into Luma Labs' Dream Machine and its updated Ray2 model, which is pushing the boundaries of image-to-video technology.

In This Video:
• AI Image-to-Video Revolution: Discover how Luma's Ray2 model is setting the bar for image-to-video conversion, with stunning visuals and unexpected "happy accidents" that spark creativity.
• Google's Imagen 3: See how Google is upping its game with new inpainting capabilities, making it one of the best free AI image generators available today.
• VFX Compositing Game Changer: Get a sneak peek at an upcoming model poised to transform VFX compositing, blending video elements seamlessly like never before.
• Creative Process Insights: Follow the demo featuring Midjourney-inspired visuals, detailed project boards, and camera control tips to achieve that Michael Bay cinematic style.
• Community Showcase: Enjoy a curated selection of impressive community outputs, from cyberpunk anime scenes to dynamic video compositing examples.

LINKS:
Luma Labs: https://lumalabsai.partnerlinks.io/dd1jzuzx6o87
Imagen 3: https://deepmind.google/technologies/imagen-3/
Pika: https://pika.art/
Snap Research: https://x.com/AbermanKfir/status/1888987447763292305

CHAPTERS:
0:00 - Intro
0:37 - Luma Labs Image to Video Update
1:04 - Demo Video
1:27 - Luma Platform Walkthrough
2:00 - Midjourney Bayham
2:28 - First Generation with Luma's Image to Video
3:00 - More Generations
3:15 - Limitations and More Like
3:47 - Wonky Shots
4:04 - Camera Controls
4:52 - Motion and Outputs
5:30 - Community Outputs
6:00 - Animation
6:44 - Lunch on a Skyscraper
7:17 - Imagen 3 Updates
8:21 - Inpainting
9:17 - DallE-4 Incoming?
10:00 - Pika Additions
10:31 - Another Example
11:01 - Jon Finger's Impressive Tests
12:15 - Snap Research: AI VFX!