@younesbelkada
younesbelkada / finetune_llama_v2.py
Last active April 7, 2025 18:27
Fine tune Llama v2 models on Guanaco Dataset
# coding=utf-8
# Copyright 2023 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
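The preview above stops at the Apache license header; the gist itself fine-tunes Llama 2 on the Guanaco dataset with 4-bit QLoRA adapters via TRL. Below is a minimal sketch of that kind of setup, not the gist verbatim: it assumes the transformers, peft, bitsandbytes, datasets and trl packages, the timdettmers/openassistant-guanaco dataset, and an older TRL release in which SFTTrainer still accepts dataset_text_field/max_seq_length directly (newer releases moved these into SFTConfig).

import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig, TrainingArguments)
from trl import SFTTrainer

model_name = "meta-llama/Llama-2-7b-hf"  # placeholder; any Llama v2 checkpoint works

# Load the base model in 4-bit (QLoRA-style) so it fits on a single GPU
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    model_name, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token

# Guanaco instruction-tuning data; the "text" column holds the formatted conversations
dataset = load_dataset("timdettmers/openassistant-guanaco", split="train")

# Low-rank adapters trained on top of the frozen 4-bit base model
peft_config = LoraConfig(r=64, lora_alpha=16, lora_dropout=0.1, task_type="CAUSAL_LM")

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    peft_config=peft_config,
    dataset_text_field="text",   # argument name in older TRL releases
    max_seq_length=512,
    tokenizer=tokenizer,
    args=TrainingArguments(
        output_dir="./results",
        per_device_train_batch_size=4,
        gradient_accumulation_steps=4,
        learning_rate=2e-4,
        max_steps=500,
    ),
)
trainer.train()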
Getting started with SageMaker
https://docs.aws.amazon.com/sagemaker/latest/dg/gs.html
* Lab 1: Image Classification
  * Traffic Sign classification
  * https://github.com/aws-samples/aws-ml-vision-end2end/
* Lab 2: Transfer Learning
  * https://s3.amazonaws.com/smallya-test/mxnet-finetune-nb/finetuning-mxnet.zip
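Both labs above run training through the SageMaker Python SDK. The following is a rough sketch of launching a training job with the built-in image-classification algorithm, assuming SDK v2; the IAM role, S3 paths, and hyperparameter values are placeholders, not taken from the labs.

import sagemaker
from sagemaker import image_uris
from sagemaker.estimator import Estimator

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder IAM role

# Container image for the built-in image-classification algorithm in this region
image_uri = image_uris.retrieve("image-classification", region=session.boto_region_name)

estimator = Estimator(
    image_uri=image_uri,
    role=role,
    instance_count=1,
    instance_type="ml.p3.2xlarge",
    output_path="s3://my-bucket/traffic-signs/output",  # placeholder bucket
    sagemaker_session=session,
)

# Illustrative hyperparameters; real values depend on the dataset preparation in the lab
estimator.set_hyperparameters(num_classes=43, num_training_samples=39209,
                              image_shape="3,64,64", epochs=10)

# Each channel points at data prepared for the algorithm (e.g. RecordIO) in S3
estimator.fit({
    "train": "s3://my-bucket/traffic-signs/train",
    "validation": "s3://my-bucket/traffic-signs/validation",
})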
@karpathy
karpathy / pg-pong.py
Created May 30, 2016 22:50
Training a Neural Network ATARI Pong agent with Policy Gradients from raw pixels
""" Trains an agent with (stochastic) Policy Gradients on Pong. Uses OpenAI Gym. """
import numpy as np
import pickle  # the original 2016 gist uses Python 2's cPickle
import gym
# hyperparameters
H = 200 # number of hidden layer neurons
batch_size = 10 # every how many episodes to do a param update?
learning_rate = 1e-4
gamma = 0.99 # discount factor for reward
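The preview cuts off at the hyperparameters; the heart of the script is turning each episode's sparse rewards into discounted returns that weight the policy-gradient update. Below is a small sketch of that step, written for Python 3 with NumPy and using the same gamma as above; it is not a verbatim excerpt from the gist.

import numpy as np

def discount_rewards(r, gamma=0.99):
    """Compute discounted returns, resetting the running sum at game boundaries
    (in Pong a nonzero reward means a point was scored and the rally ended)."""
    discounted = np.zeros_like(r, dtype=np.float64)
    running_add = 0.0
    for t in reversed(range(len(r))):
        if r[t] != 0:
            running_add = 0.0  # reset at the episode boundary
        running_add = running_add * gamma + r[t]
        discounted[t] = running_add
    return discounted

# Example: sparse Pong-style rewards over 5 frames, point won on the last frame
rewards = np.array([0.0, 0.0, 0.0, 0.0, 1.0])
returns = discount_rewards(rewards)
# Standardize the returns before using them as advantages in the gradient update
returns = (returns - returns.mean()) / (returns.std() + 1e-8)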
@ihoneymon
ihoneymon / how-to-write-by-markdown.md
Last active April 28, 2025 04:15
How to use Markdown

[Common] How to write Markdown

It's in English, but for a more detailed guide to Markdown I recommend the "Markdown Guide" (https://www.markdownguide.org/). ^^

Oh, and if you feel Markdown alone isn't expressive enough, you can also make use of HTML tags.

1. About Markdown