gunicorn run:app --workers=9
gunicorn run:app --workers=9 --worker-class=meinheld.gmeinheld.MeinheldWorker
MacBook Pro 2015, Python 3.7
Framework | Server | Req/s | Max latency | +/- Stdev
---|---|---|---|---
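The `run:app` target above assumes a module `run.py` that exposes a WSGI callable named `app`. The actual app being benchmarked is not shown in this snippet; as a minimal, dependency-free stand-in, a WSGI app looks like:

```python
# run.py -- minimal WSGI app, a stand-in for the real benchmarked app,
# which is not shown in the snippet above.
def app(environ, start_response):
    body = b"Hello, world!"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]
```

Serve it with `gunicorn run:app --workers=9`; the second command additionally requires `pip install meinheld` for the meinheld worker class.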
import os
import pickle
import warnings

import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from tensorflow.keras.callbacks import EarlyStopping
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import Dropout
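For readers unfamiliar with `train_test_split`, a rough pure-Python equivalent conveys the idea (a simplified sketch only — sklearn's version additionally handles NumPy arrays, DataFrames, stratification, and multiple inputs):

```python
import random

def simple_train_test_split(rows, test_size=0.25, seed=None):
    """Shuffle rows and split off the last `test_size` fraction as a test set.
    Simplified stand-in for sklearn.model_selection.train_test_split."""
    rng = random.Random(seed)
    shuffled = rows[:]  # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_size))
    return shuffled[:cut], shuffled[cut:]
```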
# Useful Celery config.
from celery import Celery
from kombu import Queue

app = Celery('tasks',
             broker='redis://localhost:6379',
             backend='redis://localhost:6379')

app.conf.update(
    CELERY_TASK_RESULT_EXPIRES=3600,
    CELERY_QUEUES=(
        Queue('default', routing_key='tasks.#'),
    ),
)
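The `tasks.#` routing key uses AMQP topic-exchange wildcards: `#` matches zero or more dot-separated words and `*` matches exactly one. A small sketch of that matching rule (illustrative only — the real matching happens inside the broker, not in your Python code):

```python
def topic_matches(pattern, routing_key):
    """AMQP topic-exchange match: '*' matches one word, '#' matches >= 0 words."""
    def match(p_words, k_words):
        if not p_words:
            return not k_words
        head, rest = p_words[0], p_words[1:]
        if head == '#':
            # '#' can absorb any number of remaining words, including none
            return any(match(rest, k_words[i:]) for i in range(len(k_words) + 1))
        if not k_words:
            return False
        if head == '*' or head == k_words[0]:
            return match(rest, k_words[1:])
        return False
    return match(pattern.split('.'), routing_key.split('.'))
```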
'''
A Python script that starts a Celery worker and automatically reloads it
whenever a code change happens. Useful because Celery's "--autoreload" option
does not seem to work for many people.
'''
import os
import time

import psutil                                    # pip install psutil
from watchdog.observers import Observer          # pip install watchdog
from watchdog.events import PatternMatchingEventHandler
#!/bin/sh
# This is free and unencumbered software released into the public domain.
#
# Anyone is free to copy, modify, publish, use, compile, sell, or
# distribute this software, either in source code form or as a compiled
# binary, for any purpose, commercial or non-commercial, and by any
# means.
#
# In jurisdictions that recognize copyright laws, the author or authors
from collections import Counter

def is_cyclic(input_list):
    """
    The intuition is simple and can be thought of as traversing a doubly-linked
    list or a tree. For the given list to be cyclic, the first and last chars
    of the words that form the list should match up, which means these chars
    should form even pairs. Thus, this function:
    1. Creates a new list consisting of only the first and last character of
       every word in the list.
    2. Converts the new list into a string.
    3. Counts the number of occurrences of every character in the string from
       step 2, and returns True only if every count is even.
    """
    chars = ''.join(word[0] + word[-1] for word in input_list)
    counts = Counter(chars)
    return all(count % 2 == 0 for count in counts.values())
[tool.poetry]
name = "ds-3_6_6_poetry"
version = "0.1.0"
description = ""
authors = ["Vitalis <[email protected]>"]

[tool.poetry.dependencies]
python = "3.6.6"
numpy = "^1.16"
pandas = "^0.24.2"
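The caret constraints above are worth unpacking: `^1.16` allows `>=1.16,<2.0`, while for pre-1.0 versions like `^0.24.2` the caret only allows patch updates, i.e. `>=0.24.2,<0.25.0` — the upper bound bumps the leftmost non-zero component. A small sketch of that rule (simplified; a real resolver also handles pre-releases and other constraint forms):

```python
def caret_range(version):
    """Return (lower, upper) version bounds for a Poetry caret constraint
    like '^1.16'. The upper bound bumps the leftmost non-zero component."""
    parts = [int(p) for p in version.split('.')]
    while len(parts) < 3:
        parts.append(0)  # pad '1.16' out to '1.16.0'
    for i, p in enumerate(parts):
        if p != 0:
            upper = parts[:i] + [p + 1] + [0] * (len(parts) - i - 1)
            break
    else:
        upper = [0] * (len(parts) - 1) + [1]  # '^0.0.0' edge case
    return tuple(parts), tuple(upper)
```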
import math
import random
import csv
import numpy as np
import cProfile
import hashlib

memoization = {}

class Clustering:
    def k_means_clustering(self, n, s=1.0):
        """
        Performs the K-means clustering algorithm on the data for n iterations.
        This involves updating the centroids using the mean-shift heuristic
        n times and reassigning the patterns to their closest centroids.
        :param n: the number of iterations to complete
        :param s: the scaling factor to use when updating the centroids.
            This makes it possible to run with different scaling factors and
            pick the one which has a better solution (according to some
            measure of cluster quality).
        """
class Clustering:
    """
    An instance of Clustering is a solution, i.e. a particular partitioning of
    the (heterogeneous) data set into homogeneous subsets. For centroid-based
    clustering algorithms this involves looking at each pattern and assigning
    it to its nearest centroid. This is done by calculating the distance
    between each pattern and every centroid and selecting the one with the
    smallest distance. Here we are using fractional distance with the default
    parameters.
    :param d: the dimensionality of the input patterns
    :param k: the pre-specified number of clusters and centroids
    :param z: the patterns in the data set
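The fractional distance the docstring refers to generalizes the Minkowski distance to exponents between 0 and 1, which has been argued to be more meaningful than Euclidean distance in high-dimensional spaces. A minimal sketch — the exponent name `f` and its default are my choices, since the class's actual defaults are not shown here:

```python
def fractional_distance(p, q, f=0.5):
    """Minkowski-style distance with a fractional exponent 0 < f < 1:
    d(p, q) = (sum_i |p_i - q_i| ** f) ** (1 / f)."""
    return sum(abs(a - b) ** f for a, b in zip(p, q)) ** (1.0 / f)
```

With `f=1` this reduces to Manhattan distance and with `f=2` to Euclidean distance, so the same helper can serve as a drop-in for either.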