Aymane BENBACER

Full Stack Data Scientist / Co-founder

Creating stunning logos with Stable Diffusion: A guide to text-to-image generation

October 26, 2024

1. Explaining Stable Diffusion

What is Stable Diffusion?

Stable Diffusion is an open-source machine learning model that generates images from text descriptions, following the "text-to-image" format. It was developed by Stability AI and has become popular because it can produce high-quality, complex images from simple text prompts, thanks to its diffusion model architecture. It’s primarily used for applications like image generation, inpainting (filling in missing parts of images), and style transfer.

How does Stable Diffusion work?

Stable Diffusion is based on a technique known as diffusion models, which involves a two-step process: adding noise to data and then learning to reverse this process to generate clear, structured images from random noise. Here's a step-by-step explanation:

  1. Noise addition (Forward process): In the first phase, a clean image is gradually corrupted by adding random noise in small steps. Imagine taking a photo and adding tiny amounts of noise repeatedly until it looks like pure static.
  2. Denoising (Reverse process): During training, the model learns the reverse process of recovering the original image step by step from this noise. Essentially, it "undoes" the noise in reverse to reconstruct the image from an abstracted, noisy state. This learned denoising process is what ultimately enables it to generate new images from scratch.

In Stable Diffusion’s case, the model is trained to start from pure noise and then iteratively “denoise” it using a set of learned patterns, guided by the information from a text prompt, to produce coherent, high-quality images that match the description.
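
To make the forward and reverse processes concrete, here is a minimal sketch (not part of the logo pipeline) that uses the DDPMScheduler from diffusers to noise a dummy image tensor at increasing timesteps; the tensor and the chosen timesteps are purely illustrative.

import torch
from diffusers import DDPMScheduler

# Illustrative only: progressively noise a dummy "image" tensor (forward process)
scheduler = DDPMScheduler(num_train_timesteps=1000)
image = torch.rand(1, 3, 64, 64) * 2 - 1   # fake image scaled to [-1, 1]
noise = torch.randn_like(image)

for t in [0, 250, 500, 999]:
    noisy = scheduler.add_noise(image, noise, torch.tensor([t]))
    # The later the timestep, the closer the sample is to pure noise
    print(f"t={t:4d}  std of noisy sample: {noisy.std().item():.2f}")

During training, the model sees such noisy samples and learns to predict the added noise, which is exactly the reverse (denoising) direction it later uses for generation.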

What is a latent text-to-image diffusion model?

Stable Diffusion is specifically a latent text-to-image diffusion model. Breaking down what this means:

  1. Latent space: Latent space is a lower-dimensional representation of high-dimensional data. In image generation, it captures core patterns and structures without working directly with the raw, pixel-level details of each image. Stable Diffusion doesn’t generate images directly in pixel space; instead, it generates them in a compressed, “latent” space and then decodes this representation into a full image.
  2. Text-to-image: The model translates text (from prompts) to images by learning associations between textual descriptions and image representations in its training data.
  3. Diffusion: The model’s core technique is a diffusion process that gradually improves an image over several steps, starting from random noise. It does this by conditioning each denoising step on the text input.

In simpler terms, the latent text-to-image diffusion model uses a two-step approach:

  • It takes the text and maps it into a latent space, where it learns the essential characteristics of what an image could look like based on that description.
  • Then it applies its denoising technique in this latent space, decoding back to an image that matches the textual prompt. This results in a detailed, high-quality image while reducing the computational load.

This approach is powerful because it combines the expressive power of text prompts with efficient generation in latent space, which enables Stable Diffusion to create diverse and complex images at a manageable computational cost.
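
The compression step is handled by a variational autoencoder (VAE). The short sketch below (illustrative, not part of the logo pipeline) loads the VAE shipped with Stable Diffusion v1-4 and encodes a dummy 512x512 image, just to show how much smaller the latent representation is.

import torch
from diffusers import AutoencoderKL

# Encode a dummy 512x512 RGB image into Stable Diffusion's latent space
vae = AutoencoderKL.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="vae")
image = torch.rand(1, 3, 512, 512) * 2 - 1   # fake image in [-1, 1]
with torch.no_grad():
    latents = vae.encode(image).latent_dist.sample() * vae.config.scaling_factor
print(image.shape)    # torch.Size([1, 3, 512, 512])
print(latents.shape)  # torch.Size([1, 4, 64, 64]) -- diffusion runs in this much smaller space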

2. Choosing a real-world application: Custom brand logo generation

Creating a logo that resonates with a specific audience can be costly and time-consuming. Generative AI allows us to automate the creative process, adjusting the style to fit branding guidelines. In this example, we’ll use Stable Diffusion to design a logo that aligns with a specified aesthetic, using real-world brand logos as training data to guide its design.

3. Setting up your Environment

Stable Diffusion can be fine-tuned on any GPU-equipped system. This example assumes access to Python, PyTorch, and a suitable environment such as Google Colab or a local machine with GPU support.

Required Packages

  • transformers for Stable Diffusion pipeline setup
  • diffusers for model handling
  • torch for tensor operations and fine-tuning
  • datasets for data handling
  • accelerate
  • torchvision
  • peft

Install the libraries using:

!pip install torch transformers diffusers datasets accelerate torchvision peft
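
Before going further, it is worth checking that PyTorch can actually see a GPU, since fine-tuning on CPU alone is extremely slow. A quick check:

import torch

# Confirm that a GPU is visible to PyTorch
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))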


4. Collecting real data

To fine-tune Stable Diffusion, we need a dataset of logo images and brand descriptors. For simplicity:

  • Dataset: You can use logos and descriptions from open datasets like LogoDet-3K or Open Logos.
  • Preprocessing: Resize all images to 512x512 pixels and organize them with captions that describe each logo’s style (a minimal sketch of this step follows below).
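
As an illustration of that preprocessing step, here is a minimal sketch that resizes a folder of logos and writes a simple caption file next to each image. The paths and caption template are placeholders; note that the training code later in this article resizes to 128x128 instead, to keep memory usage low.

import os
from PIL import Image

# Hypothetical paths, for illustration only
raw_dir, out_dir = "./raw_logos", "./data/processed"
os.makedirs(out_dir, exist_ok=True)

for fname in os.listdir(raw_dir):
    if not fname.lower().endswith((".png", ".jpg", ".jpeg")):
        continue
    img = Image.open(os.path.join(raw_dir, fname)).convert("RGB")
    img.resize((512, 512)).save(os.path.join(out_dir, fname))
    # Store a simple style caption alongside each image
    caption = f"A minimalist logo, {os.path.splitext(fname)[0]} style"
    with open(os.path.join(out_dir, os.path.splitext(fname)[0] + ".txt"), "w") as f:
        f.write(caption)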

5. Fine-tuning Stable Diffusion with Python

To customize Stable Diffusion, we’ll use LoRA (Low-Rank Adaptation), a parameter-efficient fine-tuning method that injects small trainable adapter weights into the model (here, its UNet) so it can learn new visual traits from a modest dataset without retraining the full network.

Step 1: Import libraries and model

from peft import get_peft_model, LoraConfig, TaskType
from tqdm.auto import tqdm
import os
import zipfile
from PIL import Image
import torch
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms
from transformers import CLIPTextModel, CLIPTokenizer
from diffusers import StableDiffusionPipeline, UNet2DConditionModel, AutoencoderKL, DDPMScheduler
from accelerate import Accelerator


Step 2: Prepare dataset

1- Extract the dataset archive into data/inputs
# Unzip the dataset
zip_path = "./logos project.zip"
unzip_path = "./data/inputs"
# Open the zip file
with zipfile.ZipFile(zip_path, 'r') as zip_ref:
    # List all files in the zip archive
    all_files = zip_ref.infolist()
    total_files = len(all_files)
    
    # Extract each file with a progress bar
    with tqdm(total=total_files, desc="Extracting files") as pbar:
        for file in all_files:
            zip_ref.extract(file, unzip_path)
            pbar.update(1)  # Update the progress bar for each extracted file
2- Define the data preprocessing class
class LogoDataset(Dataset):
    def __init__(self, root_dir, transform=None):
        self.root_dir = root_dir
        self.transform = transform
        self.image_paths = self._load_images()

    def _load_images(self):
        image_data = []
        for category in os.listdir(self.root_dir):
            category_path = os.path.join(self.root_dir, category)
            if not os.path.isdir(category_path):
                continue
            for brand in os.listdir(category_path)[:5]:  # Limit to 5 brands per category (for speed)
                brand_path = os.path.join(category_path, brand)
                if not os.path.isdir(brand_path):
                    continue
                images = os.listdir(brand_path)[:2]  # Limit to 2 images per brand
                for img in images:
                    image_path = os.path.join(brand_path, img)
                    prompt = f"A logo for {brand} in {category} style"
                    image_data.append((image_path, prompt))
        return image_data

    def __len__(self):
        return len(self.image_paths)

    def __getitem__(self, idx):
        img_path, prompt = self.image_paths[idx]
        image = Image.open(img_path).convert("RGB")  # Load image as RGB
        
        # Apply transformations (including conversion to tensor)
        if self.transform:
            image = self.transform(image)
        return image, prompt

3- Create the data loader

# Define your transformations
transform = transforms.Compose([
    transforms.Resize((128, 128)),  # Resize to 128x128 to keep memory usage low (SD v1-4 natively uses 512x512)
    transforms.ToTensor(),           # Convert PIL Image to tensor
    transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])  # Normalize to range [-1, 1]
])
# Set up the dataset and dataloader
train_dataset = LogoDataset(root_dir=f"{unzip_path}/logos project/train", transform=transform)
train_loader = DataLoader(train_dataset, batch_size=4, shuffle=True)
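
Optionally, you can pull a single batch to confirm that the tensor shapes and prompts look right before training; this is just a sanity check and can be removed.

# Optional sanity check: inspect one batch before training
sample_images, sample_prompts = next(iter(train_loader))
print(sample_images.shape)   # e.g. torch.Size([4, 3, 128, 128])
print(sample_prompts[0])     # e.g. "A logo for <brand> in <category> style"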


Step 3: Fine-tune the Stable Diffusion model using LoRA

# Load Stable Diffusion Model and Configure LoRA on UNet
model_name = "CompVis/stable-diffusion-v1-4"
accelerator = Accelerator()
pipe = StableDiffusionPipeline.from_pretrained(model_name).to(accelerator.device)
tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")
# LoRA Configuration
lora_config = LoraConfig(
    r=8,                      # LoRA rank
    lora_alpha=16,            # Scaling factor
    lora_dropout=0.1,         # Dropout rate
    task_type="UNet",         # Custom label; not a built-in PEFT task type, so a generic PeftModel wrapper is used
    target_modules=["to_q", "to_k", "to_v", "to_out.0"]  # Attention projection layers in the SD v1-4 UNet
)
pipe.unet = get_peft_model(pipe.unet, lora_config)  # Apply LoRA to the UNet

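With LoRA applied, only a small fraction of the UNet's weights are trainable. PEFT models expose print_trainable_parameters(), which is a quick way to confirm that the base model stays frozen:

# Confirm that only the LoRA adapter weights will be trained
pipe.unet.print_trainable_parameters()
# Prints something like: trainable params: ... || all params: ... || trainable%: ...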

Step 4: Launch training loop

# Training loop: LoRA fine-tuning of the UNet with the standard noise-prediction objective
vae = pipe.vae
text_encoder = text_encoder.to(accelerator.device)
noise_scheduler = DDPMScheduler.from_pretrained(model_name, subfolder="scheduler")
optimizer = torch.optim.AdamW(filter(lambda p: p.requires_grad, pipe.unet.parameters()), lr=1e-4)

epochs = 1  # Adjust as needed
for epoch in range(epochs):
    for images, prompts in tqdm(train_loader):
        images = images.to(accelerator.device)

        # Encode images into the VAE latent space
        with torch.no_grad():
            latents = vae.encode(images).latent_dist.sample() * vae.config.scaling_factor

        # Encode text prompts with the CLIP text encoder
        input_ids = tokenizer(list(prompts), return_tensors="pt", padding=True, truncation=True).input_ids.to(accelerator.device)
        with torch.no_grad():
            encoder_hidden_states = text_encoder(input_ids)[0]

        # Forward process: add random noise to the latents at a random timestep
        noise = torch.randn_like(latents)
        timesteps = torch.randint(0, noise_scheduler.config.num_train_timesteps, (latents.shape[0],), device=latents.device)
        noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps)

        # Reverse process: the UNet predicts the noise; train the LoRA weights with an MSE loss
        noise_pred = pipe.unet(noisy_latents, timesteps, encoder_hidden_states).sample
        loss = torch.nn.functional.mse_loss(noise_pred, noise)
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

    print(f"Epoch {epoch + 1}/{epochs} - last batch loss: {loss.item():.4f}")
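
After training, you will typically want to persist the LoRA adapter so it can be reloaded later without repeating fine-tuning. A minimal sketch (the output path is just an example):

# Save only the LoRA adapter weights (much smaller than the full model)
pipe.unet.save_pretrained("./logo-lora-adapter")  # example path

# Later, the adapter can be re-attached to a freshly loaded UNet:
# from peft import PeftModel
# pipe = StableDiffusionPipeline.from_pretrained(model_name).to(accelerator.device)
# pipe.unet = PeftModel.from_pretrained(pipe.unet, "./logo-lora-adapter")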

6. Generating custom logos with Stable Diffusion

Once fine-tuning completes, the model can generate brand-aligned logos. We can experiment with different text prompts to guide the model.

text_prompt = "Modern tech logo"
# 3 inference steps keeps this demo fast; 25-50 steps give much cleaner results
generated_logo = pipe(prompt=text_prompt, num_inference_steps=3).images[0]
generated_logo.show()  # Display the generated logo

Result:

The logo above was generated by our fine-tuned model. It is not perfect: the cleaner the prompts and training images, and the more data the model is fine-tuned on, the better the generated logos will become.
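
To explore variations, you can sweep over several prompts and fix a random seed per image so the results are reproducible. The prompts below are only examples:

# Generate a few reproducible variations (example prompts)
prompts = ["Minimalist coffee shop logo", "Flat geometric fintech logo", "Vintage bakery logo"]
for i, p in enumerate(prompts):
    generator = torch.Generator(device=accelerator.device).manual_seed(i)
    img = pipe(prompt=p, num_inference_steps=25, generator=generator).images[0]
    img.save(f"logo_{i}.png")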

7. Evaluating and iterating on results

To improve the model, review the generated logos and refine prompts or adjust training parameters. Introducing new training examples with the desired styles can also enhance precision. Finally, you can speed up training by running on a GPU instead of a CPU, and improve results by lifting the dataset limits used above (5 brands per category, 2 images per brand).
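
One way to make this review loop less subjective is to score prompt-image agreement with CLIP. This is an optional sketch, not part of the original pipeline; the model checkpoint is just a common choice.

from transformers import CLIPModel, CLIPProcessor

# Optional: score how well a generated logo matches its prompt with CLIP
clip_model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
clip_processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

inputs = clip_processor(text=["Modern tech logo"], images=generated_logo, return_tensors="pt", padding=True)
with torch.no_grad():
    score = clip_model(**inputs).logits_per_image.item()
print(f"CLIP prompt-image score: {score:.2f}")  # higher means better prompt/image agreement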

Conclusion

With generative AI, we can streamline logo design, adapting the model to reflect specific styles or themes. This example shows how an open-source generative model like Stable Diffusion can be turned into a powerful, tailored tool for visual branding using accessible Python tools and frameworks.
