
# /usr/local/bin/ffprobe -v quiet -print_format json -show_format -show_streams /F55/DR_Z0006.MXF
{
"streams": [
{
"index": 0,
"codec_name": "h264",
"codec_long_name": "H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10",
"profile": "High 4:2:2 Intra",
"codec_type": "video",
"codec_time_base": "1001/48000",
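ffprobe's `-print_format json` output is straightforward to consume programmatically. A minimal sketch, parsing a trimmed copy of the stream block above rather than shelling out to ffprobe:

```python
import json

# Trimmed sample mirroring the ffprobe output above.
sample = """
{
  "streams": [
    {
      "index": 0,
      "codec_name": "h264",
      "codec_long_name": "H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10",
      "profile": "High 4:2:2 Intra",
      "codec_type": "video",
      "codec_time_base": "1001/48000"
    }
  ]
}
"""

info = json.loads(sample)
video = info["streams"][0]
print(video["codec_name"], video["profile"])  # h264 High 4:2:2 Intra
```

In practice you would pipe `ffprobe -v quiet -print_format json -show_format -show_streams <file>` into `json.loads` the same way.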
Use <mxf input> '-' for standard input
Options (* means option is required):
-h | --help Show usage and exit
-v | --version Print version info
-p Print progress percentage to stdout
-l <file> Log filename. Default log to stderr/stdout
--log-level <level> Set the log level. 0=debug, 1=info, 2=warning, 3=error. Default is 1
-t <type> Clip type: as02, as11op1a, as11d10, as11rdd9, op1a, avid, d10, rdd9, as10, wave. Default is op1a
* -o <name> as02: <name> is a bundle name
as11op1a/as11d10/op1a/d10/rdd9/as10/wave: <name> is a filename
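The usage text above looks like it belongs to one of the bmx wrapping tools; the binary name is not shown in the excerpt, so `bmxtranswrap` below is an assumption. A sketch that assembles an OP1a wrap command from the documented flags only:

```python
import shutil
import subprocess

# Build a command from the flags documented in the usage text above.
# "bmxtranswrap" is an assumed binary name; the excerpt does not name it.
cmd = [
    "bmxtranswrap",
    "-t", "op1a",        # clip type (op1a is the default)
    "-p",                # print progress percentage to stdout
    "-o", "out.mxf",     # for op1a, <name> is a filename
    "DR_Z0006.MXF",      # input MXF ('-' would mean standard input)
]

# Only execute if the tool is actually installed on this machine.
if shutil.which(cmd[0]):
    subprocess.run(cmd, check=True)
```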
# Ref: https://arriwebgate.com/en/directlink/e631dcb5b6ac8eb5
# Filename: B001C008_180327_R1ZA.mov
# Size: 4.76GB
{
"streams": [
{
"index": 0,
"codec_name": "prores",
"codec_long_name": "Apple ProRes (iCodec Pro)",
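Camera originals like this ProRes .mov usually carry audio streams alongside the video, so it is safer to select by `codec_type` than to assume stream 0. A sketch (the audio stream here is hypothetical, since the excerpt above is truncated):

```python
import json

# Hypothetical two-stream file; the ffprobe excerpt above is truncated.
sample = """
{
  "streams": [
    {"index": 0, "codec_name": "prores", "codec_type": "video"},
    {"index": 1, "codec_name": "pcm_s24le", "codec_type": "audio"}
  ]
}
"""

streams = json.loads(sample)["streams"]
video = next(s for s in streams if s["codec_type"] == "video")
print(video["codec_name"])  # prores
```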
brimestoned / MBSFunctions.notes
Last active March 1, 2019 23:28
MBS calls for VideoJS
# Toggle Play/Pause
MBS( "WebView.RunJavaScript"; "webView"; "togglePlay();" )
# Play video
MBS( "WebView.RunJavaScript"; $WebView; "_play();" )
# Pause video
MBS( "WebView.RunJavaScript"; $WebView; "_paused();" )
# Jump from current location
brimestoned / transient-clustering-gnu-parallel-sshfs.md
Created February 28, 2019 20:03 — forked from Brainiarc7/transient-clustering-gnu-parallel-sshfs.md
How to set up a transient cluster using GNU parallel and SSHFS for distributed jobs (such as FFmpeg media encodes)

Transient compute clustering with GNU Parallel and sshfs:

GNU Parallel is a multipurpose program for running shell commands in parallel, which can often replace shell script loops, find -exec, and find | xargs. It provides the --sshlogin and --sshloginfile options to farm out jobs to multiple hosts, as well as options for sending and retrieving static resources and per-job input and output files.
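Conceptually, parallel with --sshlogin distributes pending jobs across the listed hosts as their job slots free up. A toy round-robin sketch of the kind of command lines it generates (this illustrates the idea only, not GNU parallel's actual scheduler, and the host and file names are hypothetical):

```python
import itertools

hosts = ["node1", "node2", "node3"]          # as if read from --sshloginfile
files = [f"clip{i}.mov" for i in range(6)]   # hypothetical input files

# Pair each job with a host, round-robin. GNU parallel's real scheduler
# fills per-host job slots dynamically; this is only an illustration.
jobs = [
    (host, f"ssh {host} ffmpeg -i {f} -c:v libx264 {f}.mkv")
    for host, f in zip(itertools.cycle(hosts), files)
]

for host, cmdline in jobs:
    print(cmdline)
```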

For any particular task, however, keeping track of which files need to be pushed to and retrieved from the remote hosts is somewhat of a hassle. Furthermore, cancelled or failed runs can leave garbage on the remote hosts, and if input and output files are large, sending them to local disk on the remote hosts is somewhat inefficient.

In a traditional cluster, this problem would be solved by giving all nodes access to a shared filesystem, usually with NFS or something more exotic. However, NFS doesn't work