Feature Name: (fill me in with a unique identifier, e.g. my_awesome_feature)
Type: (feature or enhancement)
Start Date: (fill me in with today's date, YYYY-MM-DD)
Author: (your name or names)
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;
import "./TrainingToken.sol";

contract DEX {
    TrainingToken token;         // the token being traded
    address public tokenAddress; // its deployed address

    // emitted whenever a buyer swaps ether for tokens
    event TokensBought(uint256 amount, uint256 etherAmount, address _buyer);

    constructor(address _tokenAddress) {
        tokenAddress = _tokenAddress;
        token = TrainingToken(_tokenAddress);
    }
}
sass/
|
|– base/
|   |– _reset.scss       # Reset/normalize
|   |– _typography.scss  # Typography rules
|   ...                  # Etc.
|
|– components/
|   |– _buttons.scss     # Buttons
|   |– _carousel.scss    # Carousel
Create a Dockerfile that is based on your production image and simply installs Xdebug into it. Example:

FROM php:5
RUN yes | pecl install xdebug \
    && echo "zend_extension=$(find /usr/local/lib/php/extensions/ -name xdebug.so)" > /usr/local/etc/php/conf.d/xdebug.ini
/*******************************************************************************
1. DEPENDENCIES
*******************************************************************************/
var gulp   = require('gulp'),        // gulp core
    sass   = require('gulp-sass'),   // sass compiler
    uglify = require('gulp-uglify'), // uglifies the js
    jshint = require('gulp-jshint'), // checks if the js is ok
    rename = require('gulp-rename'), // renames files
    concat = require('gulp-concat'); // concatenates js
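These requires only resolve if the plugins are actually installed. A minimal sketch of that install step, assuming an npm-based project is already set up in the build directory:

# install gulp and the plugins the gulpfile requires as dev dependencies
npm install --save-dev gulp gulp-sass gulp-uglify gulp-jshint gulp-rename gulp-concat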
Yesterday I upgraded our running elasticsearch cluster, with zero downtime, on a site that serves a few million search requests a day. I've been asked to describe the process, hence this blog post.
To make it more complicated, the cluster was running elasticsearch version 0.17.8 (released 6 Oct 2011) and I upgraded it to the latest 0.19.10. There have been 21 releases between those two versions, with a lot of functional changes, so I needed to be ready to roll back if necessary.
We run elasticsearch on two biggish boxes: 16 cores plus 32GB of RAM each. All indices have 1 replica, so all data is stored on both boxes (about 45GB of data). The primary data for our main indices is also stored in our database. We have a few other indices whose data lives only in elasticsearch, but those are updated only once a day. Finally, we store our sessions in elasticsearch, but active sessions are cached in memcached.
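With every index replicated on both boxes, one node can drop out of the cluster while the other keeps serving, so each step of a rolling upgrade can be gated on the cluster health API. A minimal sketch of such a gate, assuming a node reachable at es1:9200 (a hypothetical host name):

# wait for the cluster to report green before touching the next node
until curl -s 'http://es1:9200/_cluster/health' | grep -q '"status":"green"'; do
  echo 'cluster not green yet, waiting...'
  sleep 10
done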
#!/bin/bash
# Herein we back up our indexes! This script should run around 6pm, after logstash
# rotates to a new ES index and there's no new data coming in to the old one. We grab the
# metadata, compress the data files, create a restore script, and push it all up to S3.
TODAY=`date +"%Y.%m.%d"`
INDEXNAME="logstash-$TODAY" # this had better match the index name in ES
INDEXDIR="/usr/local/elasticsearch/data/logstash/nodes/0/indices/"
BACKUPCMD="/usr/local/backupTools/s3cmd --config=/usr/local/backupTools/s3cfg put"
BACKUPDIR="/mnt/es-backups/"
YEARMONTH=`date +"%Y-%m"`
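The remainder of the script is not shown here, but the header comments spell out what follows: compress the index directory, then hand the archive to the s3cmd wrapper defined above. A minimal sketch of those two steps, with s3://es-backups as a hypothetical bucket name (the restore-script step is omitted):

# compress the day's index directory into the backup area
mkdir -p "$BACKUPDIR"
tar czf "$BACKUPDIR/$INDEXNAME.tar.gz" -C "$INDEXDIR" "$INDEXNAME"

# push the archive up to S3, grouped by month
$BACKUPCMD "$BACKUPDIR/$INDEXNAME.tar.gz" "s3://es-backups/$YEARMONTH/$INDEXNAME.tar.gz"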
#! /bin/sh
# ------------------------------------------------------------------------------
# SOME INFO : fairly standard (debian) init script.
#             Note that node doesn't create a PID file (hence --make-pidfile),
#             has to be run in the background (hence --background),
#             and should NOT run as root (hence --chuid).
#
# MORE INFO : INIT SCRIPT     http://www.debian.org/doc/debian-policy/ch-opersys.html#s-sysvinit
#             INIT-INFO RULES http://wiki.debian.org/LSBInitScripts
#             INSTALL/REMOVE  http://www.debian-administration.org/articles/28
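The three flags called out above all end up in the start-stop-daemon invocation at the heart of such a script. A minimal sketch of that call, where the myapp name, the paths, and the www-data user are all hypothetical:

# start node detached from the terminal, write a PID file for it,
# and drop privileges from root to the service user
start-stop-daemon --start \
  --chuid www-data \
  --background --make-pidfile --pidfile /var/run/myapp.pid \
  --exec /usr/local/bin/node -- /var/www/myapp/app.js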