Type the following in a terminal:
brew update
brew install redis
To have launchd start redis now and restart at login:
brew services start redis
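Once the service is running, it is worth checking that Redis actually answers. The usual way is `redis-cli ping`, which should print `PONG`. The same check can be sketched in Python at the protocol level; the function name and defaults below are my own, not from the original text:

```python
import socket

def redis_ping(host="127.0.0.1", port=6379, timeout=2.0):
    """Send a raw RESP PING to a Redis server; True means it answered +PONG.

    This is a hypothetical helper for illustration; in practice you would
    use `redis-cli ping` or the redis-py client instead.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.sendall(b"PING\r\n")          # inline RESP command
            return s.recv(16).startswith(b"+PONG")
    except OSError:
        return False  # connection refused, timed out, etc.
```

If this returns False right after `brew services start redis`, check the service state with `brew services list` and the log that Homebrew's launchd plist points at.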
For this configuration you can use any web server you like; I decided to use nginx, since it is the one I mostly work with.
Generally, a properly configured nginx can handle up to 400K to 500K requests per second (clustered). The most I have seen myself is 50K to 80K requests per second (non-clustered) at about 30% CPU load; granted, that was on 2x Intel Xeon with Hyper-Threading enabled, but it runs without problems on slower machines too.
Bear in mind that this configuration is used in a testing environment, not in production, so you will need to work out how best to implement these features on your own servers.
#
# Sample nginx.conf optimized for EC2 c1.medium to xlarge instances.
# Also look at the haproxy.conf file for how the backend is balanced.
# https://www.nginx.com/blog/tuning-nginx/
#
user nginx nginx;
worker_processes 10;
error_log /var/log/nginx_error.log info;

events {
    worker_connections 1024;
}

http {
    # Limit the number of connections NGINX allows, for example from a single client
    # IP address. Setting them can help prevent individual clients from opening too
    # many connections and consuming too many resources.
    limit_conn_zone $binary_remote_addr zone=perip:10m;
    limit_conn_zone $server_name zone=perserver:10m;

    server {
        # When several limit_conn directives are specified, any configured limit will apply.
        limit_conn perip 10;
        limit_conn perserver 100;
    }
}
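The `limit_conn` directives work by counting concurrent connections per key (client IP for `perip`, server name for `perserver`) and rejecting requests with a 503 once the count hits the limit. As a rough illustration of that counting logic, here is a minimal Python sketch; the class and method names are hypothetical, not part of nginx:

```python
from collections import defaultdict

class ConnLimiter:
    """Toy model of nginx's limit_conn: count active connections per key."""

    def __init__(self, limit):
        self.limit = limit
        self.active = defaultdict(int)

    def acquire(self, key):
        """Admit a new connection for `key`, or refuse (nginx would send 503)."""
        if self.active[key] >= self.limit:
            return False
        self.active[key] += 1
        return True

    def release(self, key):
        """Connection closed; free a slot for `key`."""
        if self.active[key] > 0:
            self.active[key] -= 1

# Mirrors `limit_conn perip 10`: at most 10 concurrent connections per client IP.
per_ip = ConnLimiter(limit=10)
```

Note that `limit_conn` counts *concurrent* connections, not requests per second; for rate limiting by request frequency, nginx has the separate `limit_req` module.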
service cloud.firestore {
  match /databases/{database}/documents {
    // USERS //
    function isAuthenticated() {
      return request.auth != null;
    }
    function userExists(uid) {