Minitest loads *slower* than RSpec?
require "benchmark/ips" | |
File.write "minitest_test.rb", <<-EOS | |
require "minitest/autorun" | |
require "minitest/pride" | |
class MintestTest < Minitest::Test | |
def test_foo | |
assert true | |
end | |
end | |
EOS | |
File.write "rspec_spec.rb", <<-EOS | |
require "rspec/autorun" | |
RSpec.describe "RSpec" do | |
it "foo" do | |
expect(true).to eq true | |
end | |
end | |
EOS | |
Benchmark.ips do |x| | |
x.config(time: 10) | |
x.report("rspec") { `ruby rspec_spec.rb` } | |
x.report("minitest") { `ruby minitest_test.rb` } | |
x.compare! | |
end | |
# Calculating -------------------------------------
#                rspec     1.000  i/100ms
#             minitest     1.000  i/100ms
# -------------------------------------------------
#                rspec      4.545  (± 0.0%) i/s -     46.000
#             minitest      3.951  (± 0.0%) i/s -     40.000
#
# Comparison:
#                rspec:       4.5 i/s
#             minitest:       4.0 i/s - 1.15x slower
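Note that each iteration above shells out to a fresh ruby process, so the numbers include interpreter boot as well as library load time. Here is a minimal sketch that times just the require call itself, using only the standard benchmark library (the script name is my own, not part of the gist):

# time_require.rb (hypothetical helper)
require "benchmark"

lib = ARGV.fetch(0) # e.g. "minitest/autorun" or "rspec/autorun"
puts "#{lib}: #{Benchmark.realtime { require lib }.round(3)}s"

Run it once per library, e.g. "ruby time_require.rb minitest/autorun" and then "ruby time_require.rb rspec/autorun", to see the require cost without the benchmark-ips harness.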
I think you should remove pride and run the bm again.

I did, and then Minitest was 1.14x slower. I had only required pride to make Minitest match RSpec's functionality more closely. But I think there's something wrong with my Mac, because when @zenspider ran the same benchmarks, RSpec turned out to be 1.74x slower to load than Minitest. Maybe you can try running them.
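For reference, the pride-less variant being compared here would look like this (the file name is my own, not from the gist):

# minitest_no_pride_test.rb (hypothetical file name)
require "minitest/autorun"

class MinitestTest < Minitest::Test
  def test_foo
    assert true
  end
end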
@janko-m on my 2014 MBP. This is the original script, with pride:
☺ ruby spec_bench.rb 2.2.1
Calculating -------------------------------------
rspec 1.000 i/100ms
minitest 1.000 i/100ms
-------------------------------------------------
rspec 4.170 (± 0.0%) i/s - 42.000
minitest 2.613 (± 0.0%) i/s - 27.000
Comparison:
rspec: 4.2 i/s
minitest: 2.6 i/s - 1.60x slower
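To reproduce that with/without-pride comparison directly, the gist's own benchmark-ips pattern can be pointed at the two variants. This is a sketch that assumes minitest_test.rb from the gist and the minitest_no_pride_test.rb file sketched above both exist in the working directory:

require "benchmark/ips"

Benchmark.ips do |x|
  x.config(time: 10)
  x.report("minitest + pride") { `ruby minitest_test.rb` }
  x.report("minitest")         { `ruby minitest_no_pride_test.rb` }
  x.compare!
end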
Thanks for chipping in. It seems that, out of 5 testers, only @zenspider's machine is producing the reverse result.
In my fork I've rewritten your benchmark: https://gist.github.com/mislav/8ae7ca40feb7c967ef32#file-readme-md

It turns out there isn't much of a difference. The startup times of these test libraries don't matter much anyway, so based on that alone it's not really fair to say "X is faster than Y".
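One way to see how much of those numbers is plain interpreter boot rather than library loading is to subtract a bare-ruby baseline. A rough sketch, using only the standard benchmark library and the file names from the gist above:

require "benchmark"

boot     = Benchmark.realtime { `ruby -e ""` }             # bare interpreter startup
minitest = Benchmark.realtime { `ruby minitest_test.rb` }
rspec    = Benchmark.realtime { `ruby rspec_spec.rb` }

puts "bare ruby:     #{boot.round(3)}s"
puts "minitest only: #{(minitest - boot).round(3)}s"
puts "rspec only:    #{(rspec - boot).round(3)}s"

Single runs are noisy, so this only gives a ballpark, but it makes the fixed interpreter cost visible.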