See rune2e.sh for info on how to run the experiment.
#!/usr/bin/env bash

# Default values for percentages
DEFAULT_WIRED_LIMIT_PERCENT=85
DEFAULT_WIRED_LWM_PERCENT=75

# Read input parameters or use default values
WIRED_LIMIT_PERCENT=${1:-$DEFAULT_WIRED_LIMIT_PERCENT}
WIRED_LWM_PERCENT=${2:-$DEFAULT_WIRED_LWM_PERCENT}
# Train GPT-2 in five minutes -- for free
#
# ```bash
# pip install modal
# modal setup
# modal run wrapper.py
# ```
#
# Note that the end-to-end latency the first time is more like 25 minutes:
# - five minutes to install Torch (rip)
# Example usage:
# python merge_peft.py --base_model=meta-llama/Llama-2-7b-hf --peft_model=./qlora-out --hub_id=alpaca-qlora
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch
import argparse

def get_args():
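The fragment breaks off at `get_args`. A plausible completion, reconstructing the three flags from the usage line above; the merge flow via `PeftModel.from_pretrained` and `merge_and_unload` is my assumption about how this particular script proceeds, though both are real peft APIs:

```python
import argparse

def get_args():
    # Flags taken from the example usage in the script's header comment.
    parser = argparse.ArgumentParser(description="Merge a PEFT adapter into its base model")
    parser.add_argument("--base_model", type=str, required=True)
    parser.add_argument("--peft_model", type=str, required=True)
    parser.add_argument("--hub_id", type=str, default=None)
    return parser.parse_args()

def main():
    # Lazy imports so argument parsing stays usable without the heavy deps.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    args = get_args()
    base = AutoModelForCausalLM.from_pretrained(args.base_model, torch_dtype=torch.float16)
    tokenizer = AutoTokenizer.from_pretrained(args.base_model)
    model = PeftModel.from_pretrained(base, args.peft_model)
    merged = model.merge_and_unload()  # fold the LoRA weights into the base model
    if args.hub_id:
        merged.push_to_hub(args.hub_id)
        tokenizer.push_to_hub(args.hub_id)
```

In the real script `main()` would be invoked under an `if __name__ == "__main__":` guard.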
FROM mcr.microsoft.com/vscode/devcontainers/python:3.8

ENV DEBIAN_FRONTEND=noninteractive

RUN apt-get update && apt-get -y install --no-install-recommends java-common zip \
    && wget https://corretto.aws/downloads/latest/amazon-corretto-8-x64-linux-jdk.deb \
    && dpkg --install amazon-corretto-8-x64-linux-jdk.deb \
    && wget https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip \
    && unzip awscli-exe-linux-x86_64.zip \
    && ./aws/install \
    # Clean up
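The RUN instruction is cut off at the `# Clean up` comment. A typical tail for a layer like this removes the downloaded artifacts and the apt cache; the file names below are inferred from the downloads above, and the real Dockerfile may differ:

```dockerfile
    && rm -rf amazon-corretto-8-x64-linux-jdk.deb awscli-exe-linux-x86_64.zip aws \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*
```

One caveat: classic Dockerfile syntax does not allow a `#` comment inside a line continuation, so the original likely places that comment on its own line before the RUN or relies on BuildKit's more lenient parsing.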
import json
from subprocess import check_output
from time import sleep
| !pip -q install youtube-dl | |
| from google.colab import auth | |
| auth.authenticate_user() | |
| !gcloud config set project kora-id | |
| gcs = 'gs://co-lab/dhamma' # use your own project, gcs |
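The `json`/`check_output` imports suggest the notebook shells out to a command and parses its JSON output (youtube-dl's `-j` flag emits video metadata as JSON). A minimal sketch of that pattern; the `run_json` helper name is mine, and `echo` stands in for the real command:

```python
import json
from subprocess import check_output

def run_json(cmd):
    """Run a command and parse its stdout as JSON."""
    return json.loads(check_output(cmd))

# Stand-in command; the notebook would do something like
#   meta = run_json(["youtube-dl", "-j", url])
# to get a dict of video metadata.
meta = run_json(["echo", '{"id": "abc", "title": "demo"}'])
```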
| pip install streamlit | |
| pip install spacy | |
| python -m spacy download en_core_web_sm | |
| python -m spacy download en_core_web_md | |
| python -m spacy download de_core_news_sm |
A good way to get a taste of the Swift for TensorFlow language and tools is to set it up with Jupyter and the fastai Swift notebooks. I wanted a quick setup, which the Mac install experience currently is not, so instead I installed the release binaries in an Ubuntu container via Docker. The setup process for this scenario is not well documented, so here it is for you / future me.
What we're about to do is install the S4TF 0.4 release and the fastai v3 Swift notebooks on Ubuntu 18.04. We generally follow the swift-jupyter Dockerfile, but install the CPU-only release versions of the packages.
Below are some of the references I looked at:
https://github.com/tensorflow/swift/blob/master/docs/WhySwiftForTensorFlow.md
https://github.com/tensorflow/swift/blob/master/docs/DifferentiableFunctions.md