Raymond Tay raymondtay

:shipit:
Focusing
View GitHub Profile
@raymondtay
raymondtay / VTPrinter.cpp
Created September 21, 2025 03:48
Driver program for the ValueTracking pass
#include "llvm/ADT/APInt.h"
#include "llvm/ADT/FloatingPointMode.h"
#include "llvm/Analysis/AssumptionCache.h"
#include "llvm/Analysis/DemandedBits.h"
#include "llvm/Analysis/TargetLibraryInfo.h"
#include "llvm/Analysis/ValueTracking.h"
#include "llvm/Config/llvm-config.h" // LLVM_VERSION_MAJOR
#include "llvm/IR/DataLayout.h"
#include "llvm/IR/Dominators.h"
#include "llvm/IR/Function.h"
@raymondtay
raymondtay / test.ll
Created September 21, 2025 03:46
The test file against the ValueTracking pass
; ModuleID = 'test'
source_filename = "test.c"
target datalayout = "e-m:e-i64:64-n32:64"
declare void @llvm.assume(i1)
define i32 @demo(i32 %x, i32 %y) {
entry:
  ; Assume x >= 1 (=> nonzero, clears sign info)
  %cmp = icmp sge i32 %x, 1
  call void @llvm.assume(i1 %cmp)
  ret i32 %x
}

Understanding Comparative Benchmarks

I'm going to do something that I don't normally do, which is to say I'm going to talk about comparative benchmarks. In general, I try to confine performance discussion to absolute metrics as much as possible, or to comparisons against well-defined neutral reference points. This is precisely why Cats Effect's readme mentions a comparison to a fixed thread pool, rather than doing comparisons with other asynchronous runtimes like Akka or ZIO. Comparisons in general devolve very quickly into emotional marketing.

But, just once, today we're going to talk about the emotional marketing. In particular, we're going to look at Cats Effect 3 and ZIO 2. For context, as of this writing the ZIO team has released their first 2.0 milestone; they have not released a final 2.0 version. This means straight off the bat that we're comparing apples to oranges a bit, since Cats Effect 3 has been out and in production for months. However, there has been a post going around which cites various compar

module Origami where
-- Origami is the Japanese art of folding and unfolding
--
import Data.Bifunctor
--
-- Origami Programming refers to a style of generic programming that focuses on leveraging core patterns
-- of recursion: map, fold and unfold.
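
-- The three core patterns named above can be sketched in a few lines. This is
-- an illustrative sketch, not part of the original gist; the names (`total`,
-- `countdown`, `mapAsFold`) are made up for the example.

```haskell
module OrigamiSketch where

import Data.List (unfoldr)

-- fold: collapse a structure into a summary value
total :: [Int] -> Int
total = foldr (+) 0

-- unfold: grow a structure from a seed
countdown :: Int -> [Int]
countdown = unfoldr (\n -> if n <= 0 then Nothing else Just (n, n - 1))

-- map: derivable from fold, which is why it counts as a core pattern
mapAsFold :: (a -> b) -> [a] -> [b]
mapAsFold f = foldr (\x acc -> f x : acc) []
```

-- e.g. `total (countdown 4)` folds the unfolded list [4,3,2,1] back down to 10.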
{-# LANGUAGE GADTs #-}
import Numeric
data Expr a where
  I   :: Int -> Expr Int
  B   :: Bool -> Expr Bool
  Add :: Expr Int -> Expr Int -> Expr Int
  Mul :: Expr Int -> Expr Int -> Expr Int
  Eq  :: Eq a => Expr a -> Expr a -> Expr Bool
{-# LANGUAGE FlexibleContexts #-}
{-# LANGUAGE TypeFamilies #-}
import Control.Concurrent.STM
import Control.Concurrent.MVar
import Data.Foldable (forM_)
import Data.IORef
-- class Closet closet where
--   newClosetIO :: a -> IO (closet a)
object IO {
def cancelable[A](k: (Either[Throwable, A] => Unit) => CancelToken[IO]): IO[A]
}
// The type of the transducer function happens to resemble the folding function
// you would normally pass to `foldLeft` or `foldRight`. Here's an example
// using the following:
// {{{
// val xs = List("1", "2", "3")
// scala> xs.foldLeft
// override def foldLeft[B](z: B)(op: (B, String) => B): B
// }}}
//
// The type of `op` is exactly that of the type `RF` - what a coincidence!
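
-- The "reducing function" observation above translates directly to Haskell:
-- a step function has exactly the shape of foldl's operator, so a transducer
-- is just a function from one reducing function to another. A minimal sketch
-- (the names `RF`, `Transducer`, `mapping`, `sumParsed` are hypothetical):

```haskell
{-# LANGUAGE RankNTypes #-}
module TransducerSketch where

-- RF r a: a reducing (step) function, the shape foldl expects for its operator
type RF r a = r -> a -> r

-- A transducer rewrites one reducing function into another
type Transducer a b = forall r. RF r b -> RF r a

-- "mapping" as a transducer: apply f before handing off to the downstream step
mapping :: (a -> b) -> Transducer a b
mapping f step = \acc a -> step acc (f a)

-- Used with a plain foldl: parse each string, then sum the results
sumParsed :: [String] -> Int
sumParsed = foldl (mapping read (+)) 0
```

-- Note that `mapping read (+)` is itself an ordinary foldl operator: the
-- transformation composes before the fold ever runs, with no intermediate list.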
package com.databricks
import java.io.{File, FileNotFoundException}
import com.databricks.backend.daemon.dbutils.{FileInfo, MountInfo}
import com.databricks.dbutils_v1.DbfsUtils
import io.thalesdigital.common.services.DiskService.logger
import org.apache.hadoop.fs.FileSystem
trait LocalFileSystem extends DbfsUtils {
import org.apache.spark.sql.DataFrameReader
def configureReaderWithElasticSearch(options: Map[String, String]): DataFrameReader = {
val fr = spark.read.format("es")
def go(fr: DataFrameReader, optMap: Map[String, String]) =
optMap.foldLeft(fr)((reader, c) => reader.option(c._1, c._2))
go(fr, options)
}
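
-- The fold-the-options-into-a-builder pattern in `configureReaderWithElasticSearch`
-- is language-agnostic. A minimal Haskell sketch, with a hypothetical `Reader`
-- type standing in for Spark's DataFrameReader (not part of the original gist):

```haskell
module BuilderFold where

import qualified Data.Map.Strict as Map

-- Hypothetical stand-in for DataFrameReader: it just records its options
newtype Reader = Reader { readerOpts :: Map.Map String String }
  deriving (Eq, Show)

-- Analogue of reader.option(key, value): returns a new builder
option :: Reader -> (String, String) -> Reader
option (Reader m) (k, v) = Reader (Map.insert k v m)

-- Thread the builder through every option, exactly as the foldLeft above does
configure :: [(String, String)] -> Reader
configure = foldl option (Reader Map.empty)
```

-- e.g. `configure [("es.nodes", "localhost"), ("es.port", "9200")]` accumulates
-- both options into the builder, mirroring `optMap.foldLeft(fr)(...)`.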
def loadDataFromElasticSearch(index: String)(reader: DataFrameReader): DataFrame =
  reader.load(index)