@JupyterJones
Created April 6, 2026 16:55
An advanced Flask-based ComfyUI graphical interface that uses Ollama to enhance image prompts and assembles the results into videos.

AI Director

AI Director is a real-time, interactive image-to-image (and text-to-image) sequence generator powered by ComfyUI. It allows you to "direct" an evolving AI-generated video by providing a base story and injecting live directions while the rendering is in progress.

Features

  • Interactive Control Room: A web-based UI to manage your AI film production.
  • Story-Driven Generation: Enter a long-form story to generate a sequence of frames.
  • Live Prompt Injection: Inject directions on-the-fly (e.g., "now it starts raining", "add a cinematic lens flare") without stopping the render.
  • Visual Feedback:
    • Progress Bar: Real-time green progress bar showing frame completion.
    • Injection Status: The live prompt box changes color to Yellow when a prompt is pending and Green once it has been accepted and is active in the current frame.
  • Parameter Control: Adjust Model, VAE, LoRAs (up to 3), Seed, FPS, Frame Count, Denoise Strength, CFG, and Steps directly from the UI.
  • Automatic Video Assembly: Automatically compiles generated frames into an MP4 video using FFmpeg.
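The frame budget follows directly from these settings: each non-empty line of the story is treated as a paragraph, and each paragraph is rendered as a fixed number of frames. A minimal sketch of that accounting (the helper name is illustrative; the split mirrors the one in comfydirector.py's render loop):

```python
def total_frames(story: str, frames_per_paragraph: int = 15) -> int:
    # Each non-empty line becomes one paragraph; each paragraph
    # renders frames_per_paragraph frames.
    paragraphs = [p.strip() for p in story.split("\n") if p.strip()]
    return len(paragraphs) * frames_per_paragraph
```

At the default of 15 frames per paragraph and 5 FPS, each paragraph yields three seconds of video.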

Prerequisites

  • Python 3.9+
  • ComfyUI Server: Must be running and accessible (the script's COMFY_URL is set to http://192.168.1.41:5000).
  • FFmpeg: Required for video encoding.
  • Python Dependencies:
    pip install flask requests icecream

Setup & Configuration

  1. Assets: Ensure your ComfyUI server has the models and LoRAs listed in the dropdowns (or edit comfydirector.py to match your local inventory).
  2. Server URL: Open comfydirector.py and update COMFY_URL to point to your ComfyUI instance.
  3. Output Directory: Frames are saved to ./workflow_frames.
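For reference, these settings live near the top of comfydirector.py as plain module constants. A trimmed illustration (the real model and LoRA lists are much longer):

```python
import os

# Trimmed example of the constants to edit in comfydirector.py.
COMFY_URL = "http://192.168.1.41:5000"             # point this at your ComfyUI server
OUTPUT_DIR = os.path.abspath("./workflow_frames")  # where PNG frames are written
MODELS = ["dreamshaper_8.safetensors", "v1-5-pruned-emaonly.safetensors"]
LORAS = ["None", "more_details.safetensors"]
```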

How to Use

  1. Start the Application:
    python comfydirector.py
  2. Open the UI: Navigate to http://localhost:5002 in your browser.
  3. Configure Parameters:
    • Select your Model and VAE.
    • Choose up to 3 LoRAs (set to "None" if not needed).
    • Adjust Frames, FPS, and Denoise levels.
    • Click Update Params to save these settings.
  4. Run the Story:
    • Type your main narrative in the "AI Director Control Room" text area.
    • Click RUN.
  5. Live Directing:
    • While the video is rendering, type into the Inject Live Prompt box.
    • Click Inject.
    • The box will turn Yellow (Pending).
    • Once the next frame starts using your new prompt, the box turns Green (Active).
  6. Final Output: Once finished, a workflow.mp4 file will be generated in the workflow_frames/ directory.
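Step 5's color states map to a simple queue: injected text is buffered, appended to the next paragraph's enhanced prompt, then cleared, at which point the status flips from pending to active. A sketch of that merge (hypothetical helper; the real logic sits inline in render_video):

```python
def merge_injections(prompt: str, pending: list) -> tuple:
    # Append any buffered live directions to the upcoming prompt and
    # clear the queue; report the resulting injection status.
    if pending:
        prompt = prompt + " " + " ".join(pending)
        pending.clear()
        return prompt, "active"
    return prompt, "idle"
```

Because the queue is only drained when a new paragraph's prompt is built, a direction injected mid-paragraph takes effect at the next paragraph boundary.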

File Structure

  • comfydirector.py: The main Flask application and render engine.
  • workflow_frames/: Directory where individual PNG frames are stored.
  • story.txt: A log of all prompts used for each frame.
  • workflow_frames/workflow.mp4: The final rendered video.
import os
import time
import requests
import subprocess
from flask import Flask, render_template_string, request, jsonify, send_from_directory
from threading import Thread
from icecream import ic
import json

# ==============================
# CONFIG
# ==============================
COMFY_URL = "http://192.168.1.41:5000"
OLLAMA_URL = "http://localhost:11434/api/generate"
OUTPUT_DIR = os.path.abspath("./workflow_frames")
VIDEO_FILE = "workflow.mp4"
DEFAULT_VAE = "vae-ft-mse-840000-ema-pruned.safetensors"
DEFAULT_LORA1 = "Terror Tales.safetensors"
DEFAULT_LORA2 = "None"
# Alternative: DEFAULT_LORA3 = "detailed style SD1.5.safetensors"
DEFAULT_LORA3 = "more_details.safetensors"
DEFAULT_MODEL = "dreamshaper_8.safetensors"
MODELS = ["aiREalistic_aiRT.safetensors", "aiREalistic_warmrAIN.safetensors", "anithing_v30Pruned.safetensors", "dream2reality_v10.safetensors", "dreamshaper_8.safetensors", "influencer_v10.safetensors", "realisticVisionV60B1_v30VAE.safetensors", "ultra_v3.safetensors", "v1-5-pruned-emaonly.safetensors"]
VAES = ["vae-ft-mse-840000-ema-pruned.safetensors", "kl-f8-anime2.safetensors"]
LORAS = ["None", "CW_02_V2_NP1_ill.safetensors", "FLUX_curlg1ng3r_LoRA.safetensors", "FluxDFaeTasticDetails.safetensors", "JillRE3.safetensors", "Low_Poly_Art.safetensors", "MoroccoZIT.safetensors", "ParCInSt2.safetensors", "ParMaShi.safetensors", "QHAF01C2V798D69TARNPG86Y70.safetensors", "SANDRA_Realistic_face_v.1.safetensors", "SW_PoecticV5preview3_zit.safetensors", "SW_PoeticV5preview3_sd.safetensors", "ScotlandZIT.safetensors", "Terror Tales.safetensors", "UkraineZIT.safetensors", "[LoRA][Horror]s4w3d0ffBlend_v10.safetensors", "[LoRA][Photo]s4w3d0ffBlend_v21.safetensors", "alicelora-10.safetensors", "detailed style SD1.5.safetensors", "doa_monica-v2.safetensors", "klein_instagramreality_v2.safetensors", "more_details.safetensors", "perfection style SD1.5.safetensors", "skin tone style zib v1.1.safetensors", "skin_tone_slider_v1.safetensors", "ultra_real_v3.safetensors", "face_only_01.safetensors"]
FPS_GLOBAL = 5
FRAMES_PER_PARAGRAPH = 15
DENOISE_GLOBAL = 0.45
DEFAULT_SEED = 245487692
DEFAULT_CFG = 8
DEFAULT_STEPS = 30
os.makedirs(OUTPUT_DIR, exist_ok=True)
app = Flask(__name__)
# ==============================
# GLOBAL STATE
# ==============================
current_story = ""
injection_lines = []
MAX_LINES = 6
last_server_filename = None
running = False
current_frame = 0
history_prompts = []
current_seed = DEFAULT_SEED
vae_name = DEFAULT_VAE
lora1_name = DEFAULT_LORA1
lora2_name = DEFAULT_LORA2
lora3_name = DEFAULT_LORA3
model_name = DEFAULT_MODEL
fps_current = FPS_GLOBAL
frames_per_paragraph = FRAMES_PER_PARAGRAPH
denoise_current = DENOISE_GLOBAL
cfg_current = DEFAULT_CFG
steps_current = DEFAULT_STEPS
negative_prompt = "black, dark, blurry, low quality, distorted, text, watermark, nude, out of frame"
injection_status = "idle"
last_injected_prompt_content = ""
def logit(logdata):
    os.makedirs("logs", exist_ok=True)
    with open("logs/mylog2.txt", "a") as log_file:
        log_file.write(logdata + "\n")
    print("logs/mylog2.txt entry: ", logdata)
def readLog1():
    # Print the entire log file, then return its last 5 lines.
    with open("logs/mylog2.txt", "r") as log_file:
        lines = log_file.readlines()
    for line in lines:
        print(line)
    last_lines = lines[-5:]
    for line in last_lines:
        print(line)
    return last_lines
# ==============================
# ASSETS
# ==============================
def generate_options(items, selected):
    return "".join(
        f'<option value="{i}" {"selected" if i == selected else ""}>{i}</option>'
        for i in items
    )
# ==============================
# HTML
# ==============================
def get_html():
    return f"""
<!doctype html>
<html>
<head>
<title>AI Director</title>
<style>
body {{ font-family:'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;
background:linear-gradient(120deg,#0f2027,#203a43,#2c5364);
color:#f0f0f0; margin:0; padding:10px; }}
h2,h3 {{ color:#ffd369; margin-bottom:10px; }}
.container {{ width:90%; display:flex; gap:10px; height:90vh; }}
.column {{ width:90%; flex:1; display:flex; flex-direction:column; overflow-y:auto; padding:10px; border-radius:10px; background-color:rgba(0,0,0,0.5); }}
textarea {{ width:96%; border-radius:8px; padding:10px; font-size:16px; border:1px solid #444; background-color:#1a1a1a; color:#f0f0f0; resize:vertical; margin-bottom:10px; }}
select,input {{ width:86%; border-radius:4px; padding:5px; background-color:#1a1a1a; color:#f0f0f0; border:1px solid #444; margin-bottom:5px; }}
button {{ width:86%; background-color:#ffd369; color:#1a1a1a; font-weight:bold; border:none; border-radius:8px; padding:10px 10px; cursor:pointer; margin-top:5px; transition:0.2s; }}
button:hover {{ width:86%; background-color:#ffb84d; }}
#status {{ width:86%; color:#00ff9f; font-weight:bold; margin-bottom:10px; }}
#prompt {{ width:86%; background-color:rgba(0,0,0,0.6); padding:10px; border-radius:8px; max-height:120px; overflow-y:auto; font-family:monospace; line-height:1.4; margin-bottom:10px; border:1px solid #00ff9f; }}
#frame {{ display:block; margin:10px auto; border-radius:10px; max-width:86%; max-height:50vh; }}
#progress-container {{ width:86%; background-color:#333; border-radius:5px; height:12px; overflow:hidden; }}
#progress-bar {{ width:0%; height:100%; background:linear-gradient(90deg,#00ff9f,#00b36f); transition: width 0.5s ease; }}
#inject.pending {{ width:86%; border-color:#ffd369; }}
#inject.active {{ width:86%; border-color:#00ff9f; }}
</style>
<script>
function injectPrompt() {{
let text = document.getElementById("inject").value;
if (text.trim() === '') return;
let injectEl = document.getElementById("inject");
injectEl.classList.remove("active");
injectEl.classList.add("pending");
fetch("/inject", {{
method: "POST",
headers: {{"Content-Type":"application/json"}},
body: JSON.stringify({{text: text}})
}});
}}
function clearInject() {{ document.getElementById("inject").value=''; }}
function updateParams() {{
const model = document.getElementById("model").value;
const vae = document.getElementById("vae").value;
const lora1 = document.getElementById("lora1").value;
const lora2 = document.getElementById("lora2").value;
const lora3 = document.getElementById("lora3").value;
const neg_prompt = document.getElementById("neg_prompt").value;
const seed = parseInt(document.getElementById("seed").value) || 0;
const fps = parseInt(document.getElementById("fps").value) || 0;
const frames = parseInt(document.getElementById("frames").value) || 0;
const denoise = parseFloat(document.getElementById("denoise").value) || 0.0;
const cfg = parseFloat(document.getElementById("cfg").value) || 0.0;
const steps = parseInt(document.getElementById("steps").value) || 0;
fetch("/update_params", {{
method:"POST",
headers: {{"Content-Type":"application/json"}},
body: JSON.stringify({{model,vae,lora1,lora2,lora3,neg_prompt,seed,fps,frames,denoise,cfg,steps}})
}});
}}
function updateImage() {{
document.getElementById("frame").src = "/latest_frame?" + new Date().getTime();
fetch("/status").then(r => r.json()).then(d => {{
document.getElementById("prompt").innerText = d.prompt;
document.getElementById("status").innerText = d.status;
document.getElementById("seed").value = d.seed;
document.getElementById("fps").value = d.fps_current;
document.getElementById("frames").value = d.frames_current;
document.getElementById("denoise").value = d.denoise_current;
document.getElementById("cfg").value = d.cfg_current;
document.getElementById("steps").value = d.steps_current;
document.getElementById("neg_prompt").value = d.neg_prompt;
let progress = (d.current_frame/d.total_frames)*100;
document.getElementById("progress-bar").style.width = progress + "%";
const injectEl = document.getElementById("inject");
if(d.injection_status === "pending") injectEl.classList.add("pending");
else if(d.injection_status === "active") injectEl.classList.add("active");
else injectEl.classList.remove("pending","active");
}});
}}
setInterval(updateImage, 4000);
function runForm(e) {{
e.preventDefault();
const story = document.getElementById("story").value;
fetch("/run", {{
method:"POST",
headers: {{"Content-Type":"application/x-www-form-urlencoded"}},
body: "story=" + encodeURIComponent(story)
}});
}}
</script>
</head>
<body>
<div class="container">
<div class="column">
<h2>AI Director Control</h2>
<form onsubmit="runForm(event)">
<label>Story</label>
<textarea id="story" rows="10"></textarea>
<label>Negative Prompt</label>
<textarea id="neg_prompt" rows="4"></textarea>
<button type="submit">RUN PRODUCTION</button>
</form>
<h3>Current Scene:</h3>
<div id="prompt">...</div>
</div>
<div class="column">
<h3>Live Feedback</h3>
<div id="progress-container"><div id="progress-bar"></div></div>
<img id="frame" src="" alt="Latest Frame">
<h3>Direction Injection</h3>
<textarea id="inject" rows="4"></textarea>
<button onclick="injectPrompt()">Inject Direction</button>
<button onclick="clearInject()">Clear</button>
</div>
<div class="column">
<h3>Status: <span id="status">Idle</span></h3>
<div style="font-size:0.9em;">
<label>Model</label><select id="model">{generate_options(MODELS, model_name)}</select>
<label>VAE</label><select id="vae">{generate_options(VAES, vae_name)}</select>
<label>LoRA 1</label><select id="lora1">{generate_options(LORAS, lora1_name)}</select>
<label>LoRA 2</label><select id="lora2">{generate_options(LORAS, lora2_name)}</select>
<label>LoRA 3</label><select id="lora3">{generate_options(LORAS, lora3_name)}</select>
<label>Seed</label><input type="number" id="seed" value="{current_seed}">
<label>FPS</label><input type="number" id="fps" value="{fps_current}">
<label>Frames per Paragraph</label><input type="number" id="frames" value="{frames_per_paragraph}">
<label>Denoise</label><input type="number" id="denoise" step="0.01" value="{denoise_current}">
<label>CFG</label><input type="number" id="cfg" step="0.5" value="{cfg_current}">
<label>Steps</label><input type="number" id="steps" value="{steps_current}">
<button onclick="updateParams()" style="width:86%; background-color:#00ff9f; color:#000;">UPDATE PARAMETERS</button>
</div>
</div>
</div>
</body>
</html>
"""
# ==============================
# COMFY HELPERS
# ==============================
def upload_image(local_path):
    ic(f"Uploading {local_path}")
    fname = os.path.basename(local_path)
    with open(local_path, "rb") as f:
        files = {"image": (fname, f), "overwrite": "true"}
        try:
            r = requests.post(f"{COMFY_URL}/upload/image", files=files)
            if r.status_code == 200:
                return r.json().get("name")
            ic(f"Upload failed: {r.status_code} {r.text}")
        except Exception as e:
            ic(f"Upload error: {e}")
    return None
def generate_image(workflow):
    ic("Sending prompt to ComfyUI")
    try:
        r = requests.post(f"{COMFY_URL}/prompt", json={"prompt": workflow}, timeout=800)
        r.raise_for_status()
        pid = r.json()["prompt_id"]
        ic(f"Prompt sent, ID: {pid}")
        return pid
    except Exception as e:
        ic(f"Prompt error: {e}")
        raise
def wait_for_image(pid):
    ic(f"Waiting for {pid}")
    while True:
        try:
            r = requests.get(f"{COMFY_URL}/history/{pid}", timeout=800)
            j = r.json()
            if pid in j:
                outputs = j[pid]["outputs"]
                if "9" in outputs:
                    ic("Image found")
                    return outputs["9"]["images"]
            time.sleep(1)
        except Exception as e:
            ic(f"Wait error: {e}")
            time.sleep(1)
def download_image(info, path):
    ic(f"Downloading {info}")
    params = {"filename": info["filename"], "subfolder": info["subfolder"], "type": info["type"]}
    r = requests.get(f"{COMFY_URL}/view", params=params, timeout=800)
    with open(path, "wb") as f:
        f.write(r.content)
    ic("Download complete")
# ==============================
# OLLAMA ENHANCE
# Available local models include:
#   gemma:7b, mxbai-embed-large:latest, text-fixer:latest,
#   mistral:7b-instruct, phi3:latest, mistral:7b, codellama:13b,
#   qwen3:8b, deepseek-r1:1.5b, deepseek-coder:1.3b, LlaVa:latest,
#   nomic-embed-text:latest, llama3.2:3b
# ==============================
def enhance_paragraph(text):
    logit(f"Enhancing this original text with Llama3.2: {text}")
    payload = {
        "model": "llama3.2:3b",
        "prompt": (
            "Convert the following story into a single cinematic AI image prompt. "
            "Focus only on visual elements: subject, environment, lighting, composition, and style. "
            "Do NOT include story narration, emotions, or abstract concepts. "
            "Respond with ONLY one paragraph. This is a text2img prompt. "
            f"Text:\n{text}"
        ),
        "temperature": 0.7,
        "top_p": 0.9,
        "stream": False,  # disable streaming so the full reply arrives as one JSON object
    }
    try:
        r = requests.post(OLLAMA_URL, json=payload, timeout=800)
        ic("Status Code:", r.status_code)
        data = r.json()
        logit(f"Full JSON response: {data}")
        result = data.get("response", "").strip()
        logit(f"Final enhanced text: {result}")
        if not result:
            ic("Empty response, returning original text")
            return text
        return result
    except Exception as e:
        ic(f"Ollama enhance error: {e}")
        try:
            logit(f"Raw response: {r.text}")
        except Exception:
            pass
        return text
# ==============================
# WORKFLOW BUILDER
# ==============================
def get_workflow(seed, prompt_text, server_filename=None):
    wf = {
        "10": {"inputs": {"ckpt_name": model_name}, "class_type": "CheckpointLoaderSimple"},
        "20": {"inputs": {"vae_name": vae_name}, "class_type": "VAELoader"},
    }
    last_model, last_clip = ["10", 0], ["10", 1]
    if lora1_name and lora1_name != "None":
        wf["12"] = {"inputs": {"lora_name": lora1_name, "strength_model": 0.8, "strength_clip": 0.8,
                               "model": last_model, "clip": last_clip}, "class_type": "LoraLoader"}
        last_model, last_clip = ["12", 0], ["12", 1]
    if lora2_name and lora2_name != "None":
        wf["14"] = {"inputs": {"lora_name": lora2_name, "strength_model": 0.6, "strength_clip": 0.6,
                               "model": last_model, "clip": last_clip}, "class_type": "LoraLoader"}
        last_model, last_clip = ["14", 0], ["14", 1]
    if lora3_name and lora3_name != "None":
        wf["15"] = {"inputs": {"lora_name": lora3_name, "strength_model": 0.5, "strength_clip": 0.5,
                               "model": last_model, "clip": last_clip}, "class_type": "LoraLoader"}
        last_model, last_clip = ["15", 0], ["15", 1]
    wf["6"] = {"inputs": {"text": prompt_text, "clip": last_clip}, "class_type": "CLIPTextEncode"}
    wf["7"] = {"inputs": {"text": negative_prompt, "clip": last_clip}, "class_type": "CLIPTextEncode"}
    if server_filename:
        # img2img: encode the previous frame and denoise partially.
        wf["11"] = {"inputs": {"image": server_filename}, "class_type": "LoadImage"}
        wf["21"] = {"inputs": {"pixels": ["11", 0], "vae": ["20", 0]}, "class_type": "VAEEncode"}
        latent_input, denoise = ["21", 0], denoise_current
    else:
        # txt2img: start from an empty latent at full denoise.
        wf["5"] = {"inputs": {"width": 340, "height": 512, "batch_size": 1}, "class_type": "EmptyLatentImage"}
        latent_input, denoise = ["5", 0], 1.0
    wf["3"] = {"inputs": {"seed": seed, "steps": steps_current, "cfg": cfg_current, "sampler_name": "euler",
                          "scheduler": "normal", "denoise": denoise, "model": last_model,
                          "positive": ["6", 0], "negative": ["7", 0], "latent_image": latent_input},
               "class_type": "KSampler"}
    wf["8"] = {"inputs": {"samples": ["3", 0], "vae": ["20", 0]}, "class_type": "VAEDecode"}
    wf["9"] = {"inputs": {"filename_prefix": "frame", "images": ["8", 0]}, "class_type": "SaveImage"}
    return wf
# ==============================
# RENDER LOOP
# ==============================
def render_video():
    global last_server_filename, running, current_frame, history_prompts, injection_status
    ic("Starting render video")
    running = True
    current_frame = 0
    history_prompts = []
    paragraphs = [p.strip() for p in current_story.split("\n") if p.strip()]
    total_frames = len(paragraphs) * frames_per_paragraph
    ic(f"Total paragraphs: {len(paragraphs)}, Total frames: {total_frames}")
    for para in paragraphs:
        prompt_text = enhance_paragraph(para)
        if injection_lines:
            prompt_text += " " + " ".join(injection_lines)
            injection_lines.clear()
            injection_status = "active"
        else:
            injection_status = "idle"
        history_prompts.append(prompt_text)
        for f in range(frames_per_paragraph):
            seed_use = current_seed + current_frame
            wf = get_workflow(seed_use, prompt_text, last_server_filename)
            pid = generate_image(wf)
            images = wait_for_image(pid)
            frame_path = os.path.join(OUTPUT_DIR, f"frame_{current_frame:04d}.png")
            download_image(images[0], frame_path)
            last_server_filename = upload_image(frame_path)
            current_frame += 1
    ic("All frames generated, starting ffmpeg")
    output_path = os.path.join(OUTPUT_DIR, VIDEO_FILE)
    cmd = [
        "ffmpeg", "-y",
        "-framerate", str(fps_current),
        "-i", os.path.join(OUTPUT_DIR, "frame_%04d.png"),
        "-c:v", "libx264", "-pix_fmt", "yuv420p",
        output_path,
    ]
    subprocess.run(cmd)
    running = False
    logit("Video complete")
# ==============================
# FLASK ROUTES
# ==============================
@app.route("/")
def index():
    return get_html()
@app.route("/status")
def status():
    # Count only non-empty lines, matching the paragraph split in render_video.
    paragraphs = [p for p in current_story.split("\n") if p.strip()]
    return jsonify({
        "status": "Rendering" if running else "Idle",
        "prompt": history_prompts[-1] if history_prompts else "",
        "current_frame": current_frame,
        "total_frames": len(paragraphs) * frames_per_paragraph,
        "seed": current_seed,
        "fps_current": fps_current,
        "frames_current": frames_per_paragraph,
        "denoise_current": denoise_current,
        "cfg_current": cfg_current,
        "steps_current": steps_current,
        "neg_prompt": negative_prompt,
        "injection_status": injection_status,
    })
@app.route("/latest_frame")
def latest_frame():
    files = sorted(f for f in os.listdir(OUTPUT_DIR) if f.endswith(".png"))
    if files:
        return send_from_directory(OUTPUT_DIR, files[-1])
    return "No frames yet", 404
@app.route("/inject", methods=["POST"])
def inject():
    global injection_lines, last_injected_prompt_content, injection_status
    data = request.get_json()
    text = data.get("text", "").strip()
    if text:
        injection_lines.append(text)
        last_injected_prompt_content = text
        injection_status = "pending"
    return "ok"
@app.route("/update_params", methods=["POST"])
def update_params():
    global model_name, vae_name, lora1_name, lora2_name, lora3_name, negative_prompt, \
        current_seed, fps_current, frames_per_paragraph, denoise_current, cfg_current, steps_current
    data = request.get_json()
    model_name = data.get("model", model_name)
    vae_name = data.get("vae", vae_name)
    lora1_name = data.get("lora1", lora1_name)
    lora2_name = data.get("lora2", lora2_name)
    lora3_name = data.get("lora3", lora3_name)
    negative_prompt = data.get("neg_prompt", negative_prompt)
    current_seed = data.get("seed", current_seed)
    fps_current = data.get("fps", fps_current)
    frames_per_paragraph = data.get("frames", frames_per_paragraph)
    denoise_current = data.get("denoise", denoise_current)
    cfg_current = data.get("cfg", cfg_current)
    steps_current = data.get("steps", steps_current)
    ic(f"Updated params: {data}")
    return "ok"
@app.route("/run", methods=["POST"])
def run():
    global current_story
    current_story = request.form.get("story", "").strip()
    if not current_story:
        return "No story provided"
    Thread(target=render_video, daemon=True).start()
    return "Rendering started"
# ==============================
# MAIN
# ==============================
if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5002, debug=False)