@ljleb
ljleb / logs.txt
Created February 10, 2025 19:49
SDXL Attention stats logs
Image saved as 'generated_image.png'.
Attention statistics per diffusion timestep:
Timestep 1.0: 140 attention activations recorded.
Module down_blocks.1.attentions.0.transformer_blocks.0.attn1 (self attention): mean=0.0002, std=0.0027, min=0.0000, max=0.9492
Module down_blocks.1.attentions.0.transformer_blocks.0.attn2 (cross attention): mean=0.0130, std=0.0278, min=0.0000, max=0.8486
Module down_blocks.1.attentions.0.transformer_blocks.1.attn1 (self attention): mean=0.0002, std=0.0021, min=0.0000, max=0.9780
Module down_blocks.1.attentions.0.transformer_blocks.1.attn2 (cross attention): mean=0.0130, std=0.0351, min=0.0000, max=1.0000
Module down_blocks.1.attentions.1.transformer_blocks.0.attn1 (self attention): mean=0.0002, std=0.0026, min=0.0000, max=0.9380
Module down_blocks.1.attentions.1.transformer_blocks.0.attn2 (cross attention): mean=0.0130, std=0.0320, min=0.0000, max=0.6533
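Each line above reduces one attention module's softmax weights to four numbers (mean, std, min, max), split into self-attention (attn1) and cross-attention (attn2) and grouped by diffusion timestep. A minimal sketch of that reduction, assuming a tensor of softmax-normalized attention probabilities; the function name summarize_attention is illustrative and not part of the gist:

import torch

def summarize_attention(attention_probs: torch.Tensor) -> dict:
    # Collapse a (batch*heads, query_tokens, key_tokens) probability map
    # into the four summary statistics reported in the log.
    p = attention_probs.detach().float()
    return {
        "mean": p.mean().item(),
        "std": p.std().item(),
        "min": p.min().item(),
        "max": p.max().item(),
    }

# A log line such as "mean=0.0130, std=0.0278, min=0.0000, max=0.8486"
# corresponds to printing these values with four decimal places:
#   stats = summarize_attention(attention_probs)
#   print(f"mean={stats['mean']:.4f}, std={stats['std']:.4f}, "
#         f"min={stats['min']:.4f}, max={stats['max']:.4f}")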
@ljleb
ljleb / sdxl_attention_stats.py
Last active February 10, 2025 19:48
SDXL Attention stats
import math
import torch
import torch.nn.functional as F
from typing import Optional
from diffusers import StableDiffusionXLPipeline
from diffusers.models.attention import Attention, BasicTransformerBlock
# ------------------------------------------------------------------------------
# 1. StatsCollector and CustomAttnProcessor2_0 definition
torch.manual_seed(0)
device = torch.device("cuda:0")
dtype = torch.float64
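The preview cuts off before the StatsCollector and CustomAttnProcessor2_0 definitions named in the comment above. Below is a minimal sketch of what such a pair could look like, assuming the standard diffusers Attention module API; aside from the two class names (which come from the comment), everything here is an assumption, not the gist author's code. It uses the explicit softmax path rather than F.scaled_dot_product_attention so the attention probabilities are materialized and can be summarized:

from collections import defaultdict

import torch
from diffusers.models.attention_processor import Attention


class StatsCollector:
    # Hypothetical container for per-module statistics; the real definition
    # is not visible in the preview above.
    def __init__(self):
        self.records = defaultdict(list)

    def add(self, name: str, attention_probs: torch.Tensor):
        p = attention_probs.detach().float()
        self.records[name].append(
            (p.mean().item(), p.std().item(), p.min().item(), p.max().item())
        )


class CustomAttnProcessor2_0:
    # Sketch of an attention processor that records attention statistics.
    # It follows the non-fused path of diffusers' legacy AttnProcessor
    # (explicit softmax via attn.get_attention_scores) so the weights exist
    # as a tensor. It only covers the 3D transformer-block case used by
    # attn1/attn2 in the SDXL UNet, not the group/spatial-norm variants.
    def __init__(self, collector: StatsCollector, name: str):
        self.collector = collector
        self.name = name

    def __call__(self, attn: Attention, hidden_states,
                 encoder_hidden_states=None, attention_mask=None, **kwargs):
        is_cross = encoder_hidden_states is not None
        batch_size, key_len, _ = (
            encoder_hidden_states.shape if is_cross else hidden_states.shape
        )
        attention_mask = attn.prepare_attention_mask(attention_mask, key_len, batch_size)

        query = attn.to_q(hidden_states)
        context = encoder_hidden_states if is_cross else hidden_states
        if is_cross and attn.norm_cross:
            context = attn.norm_encoder_hidden_states(context)
        key = attn.to_k(context)
        value = attn.to_v(context)

        query = attn.head_to_batch_dim(query)
        key = attn.head_to_batch_dim(key)
        value = attn.head_to_batch_dim(value)

        # The softmax-normalized weights summarized in the log above.
        attention_probs = attn.get_attention_scores(query, key, attention_mask)
        self.collector.add(self.name, attention_probs)

        hidden_states = torch.bmm(attention_probs, value)
        hidden_states = attn.batch_to_head_dim(hidden_states)
        hidden_states = attn.to_out[0](hidden_states)  # output projection
        hidden_states = attn.to_out[1](hidden_states)  # dropout
        return hidden_states


# Hypothetical wiring, one processor per attention module (keys match
# pipe.unet.attn_processors), which is how per-module names like
# down_blocks.1.attentions.0.transformer_blocks.0.attn1 end up in the log:
#   collector = StatsCollector()
#   pipe.unet.set_attn_processor({
#       name: CustomAttnProcessor2_0(collector, name)
#       for name in pipe.unet.attn_processors
#   })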
{
  "last_node_id": 327,
  "last_link_id": 589,
  "nodes": [
    {
      "id": 323,
      "type": "PreviewImage",
      "pos": [
        864,
        389
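The block above is the truncated start of a ComfyUI workflow export: a JSON graph whose nodes each carry an id, a type (here a PreviewImage node with id 323) and a canvas position. A minimal sketch of inspecting such an export with the standard library, assuming the complete file is available as workflow.json (the filename is a placeholder, not part of the gist):

import json

# "workflow.json" is a placeholder; the fragment above is cut off, so this
# assumes the full export was saved to disk.
with open("workflow.json", "r", encoding="utf-8") as f:
    workflow = json.load(f)

print(f"last_node_id={workflow['last_node_id']}, last_link_id={workflow['last_link_id']}")
for node in workflow["nodes"]:
    # Each node records at least an id, a type and a canvas position.
    print(f"node {node['id']}: {node['type']} at {node.get('pos')}")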