# bulk-load key/value pairs from a CSV (one key,value pair per line) into Redis
cat data.csv | awk -F',' '{print " SET \""$1"\" \""$2"\" \n"}' | redis-cli --pipe
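For a quick sanity check, here is what an input file and a follow-up lookup might look like; the file contents and key names are assumptions for illustration, not part of the original one-liner:

# data.csv (assumed format: one key,value pair per line)
#   user:1001,alice
#   user:1002,bob
redis-cli GET "user:1001"   # should print "alice" once the pipe above completes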
def read_lines_from_file_as_data_chunks(file_name, chunk_size, callback, return_whole_chunk=False):
    """
    read a file line by line regardless of its size
    :param file_name: absolute path of the file to read
    :param chunk_size: size of data to be read at a time
    :param callback: callback method, prototype ----> def callback(data, eof, file_name)
    :return:
    """
    def read_in_chunks(file_obj, chunk_size=5000):
        # generator: yield the file in fixed-size chunks until EOF
        while True:
            data = file_obj.read(chunk_size)
            if not data:
                break
            yield data
# download the playlist and the numbered .ts segments
wget http://xxx.com/upload/20180419/79bf8642d29b9d51a5bebb8ddd0ea926/79bf8642d29b9d51a5bebb8ddd0ea926.m3u8
aria2c -x 4 -j 4 -Z -P http://xxx.com/upload/20180419/79bf8642d29b9d51a5bebb8ddd0ea926/79bf8642d29b9d51a5bebb8ddd0ea926[000-286].ts
# decrypt a segment (example, only needed if the playlist is AES-128 encrypted)
#openssl aes-128-cbc -d -K 15D0F46608409DA364E3F5D92BDE9F61 -iv 00000000000000000000000000000000 -nosalt -in G00000000.ts -out G00000000.d.ts
# join all ts files
cat *.ts > out.ts
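If the segments are encrypted, the single-file openssl example above can be looped over every segment before joining; the key, IV, and G*.ts naming below come from that commented example and are assumptions for any other stream (the real key and IV are listed in the playlist's #EXT-X-KEY entry):

# decrypt each segment with the example key/IV, then join the decrypted copies
for f in G*.ts; do
  openssl aes-128-cbc -d -K 15D0F46608409DA364E3F5D92BDE9F61 \
    -iv 00000000000000000000000000000000 -nosalt -in "$f" -out "${f%.ts}.d.ts"
done
cat *.d.ts > out.ts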
#!/bin/bash
case $# in
    0)
        echo "Usage: $0 {start|stop}"
        exit 1
        ;;
    1)
        case $1 in
            start)
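                # --- everything below this comment is an assumed continuation: the original
                # --- snippet stops at the start) branch, so these commands are placeholders
                echo "starting service..."   # placeholder for the real start command
                ;;
            stop)
                echo "stopping service..."   # placeholder for the real stop command
                ;;
            *)
                echo "Usage: $0 {start|stop}"
                exit 1
                ;;
        esac
        ;;
esac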
Source: http://datahugger.org/datascience/setting-up-hadoop-v2-with-spark-v1-on-osx-using-homebrew/
This post builds on the previous Hadoop (v1) setup guide and explains how to set up a single-node Hadoop (v2) cluster with Spark (v1) on OS X (10.9.5).
Apache Hadoop is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage. Rather than relying on hardware to deliver high availability, the library itself is designed to detect and handle failures at the application layer, thus delivering a highly available service on top of a cluster of computers, each of which may be prone to failure. The Apache Hadoop framework is composed of the following core modules:
HDFS (Hadoop Distributed File System): a distributed file system that stores data on commodity machines, providing very high aggregate bandwidth across the cluster.
YARN (Yet Another Resource Negotiator): a resource-management platform responsible for managing compute resources in the cluster and using them to schedule users' applications.
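As a rough sketch of the first step the post describes, the Homebrew-based install boils down to a couple of commands; the formula names and version checks below are assumptions based on current Homebrew, not copied from the post:

# install Hadoop and Spark with Homebrew, then confirm the binaries are on PATH
brew install hadoop
brew install apache-spark
hadoop version
spark-submit --version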