- A/B testing
- Ron Kohavi's Trustworthy Online Controlled Experiments:
Features are built because teams believe they are useful, yet in many domains most ideas fail to improve key metrics. Only one third of the ideas tested at Microsoft improved the metric(s) they were designed to improve (Kohavi, Crook and Longbotham 2009). Success is even harder to find in well-optimized domains like Bing and Google, whereby some measures’ success rate is about 10–20% (Manzi 2012).
import argparse
import logging
from pathlib import Path
from typing import Any, Iterable, Sequence

import backoff
import snowflake.connector.errors

from .snowflake_connection import get_snowflake_connection
#!/usr/bin/env pwsh
<#
.Synopsis
Creates a PlantUML ERD diagram from a manifest.json file.
.Parameter Path
Path to the manifest.json file. Defaults to .\target\manifest.json
#>
[CmdletBinding()]
param (
    [string]$Path = '.\target\manifest.json'
)
Suppose there's a stream of multi-stage requests for documents with given IDs to be retrieved from a source, contrast-adjusted, OCRed, word-counted, placed in a destination directory, etc. (the exact stages can vary from request to request). The stages of a request have to be done in order, and intermediate results in the cache should be reused (we don't want to keep OCRing the same document over and over). I'd like to create a stream of requests for the individual applications (document retrieval app, OCR app, etc.). How can I do that with ksqlDB?
Let's represent both requests and products as arrays of the steps it takes to create them. In the worst case, each step can carry custom information, so the representation should be of type ARRAY<MAP<STRING,STRING>>. Using an array as a key is not currently supported in ksqlDB, but we can 'cheat' by casting the array to a STRING and using that as the key.
Let's break down the logic first:
- Any new request should trigger the requests of all its prerequisites (see the sketch below).
- Any product already present in the cache should not be recomputed; a request waiting on it should advance straight to its next stage.
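Here is a minimal ksqlDB sketch of the first rule. The stream, column, and topic names are made up for illustration, and it assumes the built-in SLICE and ARRAY_LENGTH functions. A request's immediate prerequisite is the same step array minus its last step, re-keyed by its string cast; feeding prereq_requests back into requests would recurse down to single-step requests.

CREATE STREAM requests (key_str STRING KEY, steps ARRAY<MAP<STRING,STRING>>)
    WITH (kafka_topic='requests', value_format='json', partitions=1);

-- Emit a request for the immediate prerequisite of every multi-step request,
-- keyed by the string cast of the shortened step array.
CREATE STREAM prereq_requests AS
    SELECT SLICE(steps, 1, ARRAY_LENGTH(steps) - 1) AS steps
    FROM requests
    WHERE ARRAY_LENGTH(steps) > 1
    PARTITION BY CAST(SLICE(steps, 1, ARRAY_LENGTH(steps) - 1) AS STRING);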
Suppose you want to insert values from one ksqlDB stream into another while auto-incrementing some integer value in the destination stream.
First, create the two streams:
CREATE STREAM dest (ROWKEY INT KEY, i INT, x INT) WITH (kafka_topic='test_dest', value_format='json', partitions=1);
CREATE STREAM src (x INT) WITH (kafka_topic='test_src', value_format='json', partitions=1);
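As a quick smoke test (the query that actually populates dest is not shown in this fragment), we can push a couple of rows into src; the goal is for dest to end up with (i=1, x=10), (i=2, x=20), and so on:

INSERT INTO src (x) VALUES (10);
INSERT INTO src (x) VALUES (20);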
#!/bin/bash
# Script for installing tmux on systems where you don't have root access.
# tmux will be installed in $HOME/local/bin.
# It's assumed that wget and a C/C++ compiler are installed.

# exit on error
set -e

TMUX_VERSION=2.5