AI LIBRARY • REFERENCE LISTS

Large Language Models

(If you are just beginning to use LLMs at work, this HubSpot Guide to Claude AI for Enterprise is a great place to start.)

OpenAI ChatGPT

DeepSeek

Anthropic Claude

Groq Inference Engine (delivers open-source models with extremely low latency)

Google Gemini

Grok

MY AI ART WORKFLOW

Here is a breakdown of how I made my first full AI music video, for Run to the Sea.

The song itself I made with my human friends.

The concept for the video was to use a couple of my solo album covers as inspiration (Little Songs and Morning Orbit) and make a video featuring the “younger” version of me from that era.

Two new developments I had been waiting for before trying a full video happened this month: Runway’s Act One for lipsync, and OpenArt making it simple to fine-tune a Flux model to create consistent characters.

Obviously, the video is not perfect, but the tech is very new, and as an experiment I think it points to what will be possible.

So here is the workflow I used, along with the trial and error, the things I learned, and what I would do differently next time.

First, I used OpenArt to fine-tune a Flux model on myself. OpenArt is a platform with a ton of different tools and a great place to experiment with different Gen AI models. I uploaded about 20 images of myself and trained the model.
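
If you wanted to script this step instead of clicking through the web UI, the general shape is: upload a batch of training photos plus a trigger word, and kick off a fine-tune job. This is a hypothetical sketch; the endpoint, field names, and parameter names below are placeholders, not OpenArt's actual API.

```python
# Hypothetical sketch of kicking off a character fine-tune.
# API_URL, field names, and model names are all assumptions.
import pathlib
import requests

API_URL = "https://api.example.com/v1/finetunes"  # placeholder endpoint
API_KEY = "YOUR_API_KEY"

# Roughly 20 photos of the subject, as used in the post
training_images = sorted(pathlib.Path("training_photos").glob("*.jpg"))[:20]

files = [("images", (p.name, p.open("rb"), "image/jpeg")) for p in training_images]
payload = {
    "base_model": "flux-dev",       # assumed base model name
    "trigger_word": "David Usher",  # token the prompts will reference
}

resp = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    data=payload,
    files=files,
    timeout=60,
)
resp.raise_for_status()
print("Fine-tune job started:", resp.json())
```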

For image generation I also used OpenArt: I loaded the trained Flux model of me and started each prompt with “Younger David Usher…”. I didn’t use negative prompting. I would start with one generation to test the prompt, and when I got an image that was on the right track I moved to four at a time.
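
Scripted, that test-then-batch loop looks something like the sketch below. The generate() helper and its endpoint are stand-ins for whatever image API you use, not OpenArt's real SDK.

```python
# Sketch of the "one test image, then four at a time" pattern.
# The endpoint, model name, and response shape are assumptions.
import requests

API_URL = "https://api.example.com/v1/generations"  # placeholder endpoint
API_KEY = "YOUR_API_KEY"

def generate(prompt: str, n: int) -> list[str]:
    """Request n images for a prompt and return their URLs."""
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"model": "my-flux-finetune", "prompt": prompt, "num_images": n},
        timeout=120,
    )
    resp.raise_for_status()
    return [img["url"] for img in resp.json()["images"]]

prompt = "Younger David Usher standing on a rocky shoreline at dawn"

# Cheap single-image generation to test the prompt first...
test = generate(prompt, n=1)
print("Test image:", test[0])

# ...then, once the prompt is on the right track, four at a time.
for url in generate(prompt, n=4):
    print("Variation:", url)
```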

The images were good but low-res, so I used an OpenArt upscaler. (If I were to do this again I would probably use Magnific or Topaz to upscale, so I would have more resolution going into the video stage.)
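
Another way to script the upscale step is a super-resolution model such as Real-ESRGAN through Replicate's Python client. The model slug below is an assumption based on a commonly hosted community version, and you may need to pin an explicit version hash.

```python
# Hedged sketch: 4x upscaling a frame with a hosted Real-ESRGAN model.
# "nightmareai/real-esrgan" is an assumed slug; community models often
# require an explicit ":<version-hash>" suffix to run.
import replicate

with open("frame.png", "rb") as image:
    output = replicate.run(
        "nightmareai/real-esrgan",            # assumed model slug
        input={"image": image, "scale": 4},   # 4x upscale
    )

# Depending on the model, output may be a URL or a file-like object.
print("Upscaled result:", output)
```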

For image-to-video, I used either Kling or Runway. When I wanted just motion, I used Kling: great interface, and I like how you can extend the clips.

For lipsync, I used Runway; their new Act One feature is very cool. With Act One you load an image as the source, but also a “driving video” that shows the AI the mouth and head placement for the lipsync. I went pretty lo-fi on this and shot myself against the hotel bathroom shower curtain with my iPhone. (I was on the road for some keynote gigs, so lots of hotels.) I think if I spent more time on the driving videos, the lipsync would be better.
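
Scripted, the lipsync step boils down to two inputs: one source image and one driving video of your own face supplying the mouth and head motion. The endpoint and field names below are placeholders, not Runway's actual Act One API.

```python
# Hypothetical sketch of a lipsync request: source image + driving video.
# The endpoint and multipart field names are assumptions.
import requests

API_URL = "https://api.example.com/v1/lipsync"  # placeholder endpoint
API_KEY = "YOUR_API_KEY"

with open("younger_me.png", "rb") as img, open("driving_take.mp4", "rb") as vid:
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        files={
            "source_image": ("younger_me.png", img, "image/png"),
            # The driving video supplies mouth and head placement
            "driving_video": ("driving_take.mp4", vid, "video/mp4"),
        },
        timeout=300,
    )
resp.raise_for_status()
print("Lipsync job:", resp.json())
```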

I edited it on my MacBook with iMovie.

The total cost was probably around $200 in generation credits. (For one of my videos in the ’90s we spent over $250K.)

And in time, I’m guessing I put in about 8 hours over a few nights.

If you have any follow-up questions, just put them in the LinkedIn comments and I'll try to answer.

I made the song with my friends Jonathan Gallivan, Albert Chambers and Vincent Vertefeuille.

AI NO-CODE PROGRAMMING

I have tried all of these AI application/website builders, and Lovable is by far the best for people who don’t code.

Lovable

Bolt

Replit

Cursor

Claude Code

Windsurf

AGENT WORKFLOW

TOOLS

Perplexity Search Engine

Firecrawl (turns websites into LLM-ready data)

Manus AI Agent (this could go in the Agent Workflow section as well)

Cassidy Workplace Platform

Ada Customer Service