{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
},
"source": [
"<a href=\"https://colab.research.google.com/gist/celestialphineas/859b87fe082b078d71811d1b5b64b3bc/pix2pix-for-roman2italic.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "7wNjDKdQy35h"
},
"source": [
"# Install"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "TRm-USlsHgEV"
},
"outputs": [],
"source": [
"!git clone https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"id": "Pt3igws3eiVp"
},
"outputs": [],
"source": [
"import os\n",
"os.chdir('pytorch-CycleGAN-and-pix2pix/')"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "z1EySlOXwwoa"
},
"outputs": [],
"source": [
"!pip install -r requirements.txt"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "8daqlgVhw29P"
},
"source": [
"# Datasets\n",
"\n",
"Download one of the official datasets with:\n",
"\n",
"- `bash ./datasets/download_pix2pix_dataset.sh [cityscapes, night2day, edges2handbags, edges2shoes, facades, maps]`\n",
"\n",
"Or use your own dataset by creating the appropriate folders and adding in the images. Follow the instructions [here](https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/blob/master/docs/datasets.md#pix2pix-datasets)." | |
] | |
}, | |
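{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Optional: preparing your own aligned dataset (sketch).** pix2pix trains on aligned pairs: each sample is a single image with domain A on the left half and domain B on the right half, organized into `train`/`val`/`test` folders. The repository's `datasets/combine_A_and_B.py` helper builds such pairs from two folders of identically named images. The paths below are placeholders, not data provided by this notebook."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Placeholder paths -- adjust to your own folders of matching A/B images\n",
"# !python datasets/combine_A_and_B.py --fold_A /path/to/A --fold_B /path/to/B --fold_AB ./datasets/my_dataset"
]
},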
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "vrdOettJxaCc"
},
"outputs": [],
"source": [
"!wget https://github.com/celestialphineas/misc/releases/download/v0.0.1/roman2ital.zip -P ./datasets\n",
"!(cd datasets && unzip -q roman2ital.zip)"
]
},
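{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Optional: sanity-check the dataset layout.** The listing below assumes the archive unpacks to `datasets/roman2ital` with the usual pix2pix splits (`train`, and possibly `val`/`test`) of paired A|B images; verify this against what you actually see."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!ls datasets/roman2ital\n",
"!ls datasets/roman2ital/train 2>/dev/null | head -n 5"
]
},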
{
"cell_type": "markdown",
"metadata": {
"id": "yFw1kDQBx3LN"
},
"source": [
"# Training\n",
"\n",
"- `python train.py --dataroot ./datasets/facades --name facades_pix2pix --model pix2pix --direction BtoA`\n",
"\n",
"Change the `--dataroot` and `--name` to your own dataset's path and model's name. Use `--gpu_ids 0,1,..` to train on multiple GPUs and `--batch_size` to change the batch size. Add `--direction BtoA` if you want to train a model to transfrom from class B to A." | |
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "0sp7TCT2x9dB"
},
"outputs": [],
"source": [
"!python train.py --dataroot ./datasets/roman2ital --name roman2ital --model pix2pix\\\n",
"    --use_wandb\\\n",
"    --input_nc 3 --output_nc 3\\\n",
"    --direction AtoB --load_size 256 --no_flip"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "9UkcaFZiyASl"
},
"source": [
"# Testing\n",
"\n",
"- `python test.py --dataroot ./datasets/facades --direction BtoA --model pix2pix --name facades_pix2pix`\n",
"\n",
"Change the `--dataroot`, `--name`, and `--direction` to be consistent with your trained model's configuration and how you want to transform images.\n",
"\n",
"> from https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix:\n",
"> Note that we specified --direction BtoA as Facades dataset's A to B direction is photos to labels.\n",
"\n",
"> If you would like to apply a pre-trained model to a collection of input images (rather than image pairs), please use --model test option. See ./scripts/test_single.sh for how to apply a model to Facade label maps (stored in the directory facades/testB).\n",
"\n",
"> See a list of currently available models at ./scripts/download_pix2pix_model.sh\n",
"\n",
"### **Note that we didn't provide test data for roman2ital.**" | |
] | |
}, | |
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "mey7o6j-0368"
},
"outputs": [],
"source": [
"!ls checkpoints/"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "uCsKkEq0yGh0"
},
"outputs": [],
"source": [
"!python test.py --dataroot ./datasets/roman2ital --direction AtoB --model pix2pix --name AtoB --use_wandb" | |
]
},
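{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Workaround sketch for the missing test split.** By default `test.py` reads images from a `test` folder under the dataroot, which the `roman2ital` archive does not include. Assuming the archive does contain a `val` split, one option is to point `test.py` at it with `--phase val`; treat both the split name and its presence as assumptions to verify against the dataset listing earlier in this notebook."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Assumes datasets/roman2ital/val exists -- see the note above\n",
"!python test.py --dataroot ./datasets/roman2ital --direction AtoB --model pix2pix --name roman2ital --phase val"
]
},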
{
"cell_type": "markdown",
"metadata": {
"id": "OzSKIPUByfiN"
},
"source": [
"# Visualize"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "9Mgg8raPyizq"
},
"outputs": [],
"source": [
"import matplotlib.pyplot as plt\n", | |
"\n", | |
"# img = plt.imread('./results/roman2ital/test_latest/images/xxxxxx_fake_B.png')\n", | |
"# plt.imshow(img)" | |
]
}
],
"metadata": {
"accelerator": "GPU",
"colab": {
"collapsed_sections": [],
"provenance": [],
"include_colab_link": true
},
"environment": {
"name": "tf2-gpu.2-3.m74",
"type": "gcloud",
"uri": "gcr.io/deeplearning-platform-release/tf2-gpu.2-3:m74"
},
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.10"
}
},
"nbformat": 4,
"nbformat_minor": 0
} |