Some SQLite 3.7 Benchmarks
Since I wrote the insertion benchmarks in my last post, SQLite 3.7 has been released. I figured it'd be interesting to see whether 3.7 changed the situation at all. Prepared statements: the specific versions compared here are 3.6.23.1 and 3.7.3. I ran the prepared statements benchmark as-is, without changing any source code. Both […]
In: C++ · Tagged with: benchmarks, sqlite
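The benchmarks in that post work against the SQLite C API (sqlite3_prepare_v2 plus bind/step/reset per row). As a rough illustration of the same pattern using only Python's standard-library sqlite3 module — an analogue, not the post's actual benchmark code — executemany compiles the statement once and rebinds parameters for each row:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a INTEGER, b TEXT)")
rows = [(i, "row %d" % i) for i in range(1000)]
# executemany prepares the INSERT once and rebinds parameters per row,
# mirroring the C-API loop of sqlite3_bind / sqlite3_step / sqlite3_reset.
conn.executemany("INSERT INTO t VALUES (?, ?)", rows)
conn.commit()
count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(count)  # 1000
```

Re-preparing the statement for every row is what the parameterized form avoids; that is the cost the benchmark measures.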
Fast Bulk Inserts into SQLite
Background: sometimes it’s necessary to get information into a database quickly. SQLite is a lightweight database engine that can be easily embedded in applications. This post covers the process of optimizing bulk inserts into an SQLite database. While this article focuses on SQLite, some of the techniques shown here will apply to other databases. […]
In: C++ · Tagged with: benchmarks, optimization, sqlite
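The full post works in C++, but the core techniques — wrapping all inserts in a single transaction, reusing one prepared statement, and relaxing durability PRAGMAs — can be sketched with Python's standard-library sqlite3 module. This is an illustrative sketch, not the article's code; the table name and schema are made up:

```python
import sqlite3

def bulk_insert(rows):
    # isolation_level=None puts the connection in autocommit mode so we
    # control the transaction boundaries explicitly.
    conn = sqlite3.connect(":memory:", isolation_level=None)
    # Typical bulk-load settings: trade durability for speed.
    # (These mainly matter for on-disk databases, not :memory:.)
    conn.execute("PRAGMA synchronous = OFF")
    conn.execute("PRAGMA journal_mode = MEMORY")
    conn.execute("CREATE TABLE t (a INTEGER, b TEXT)")
    conn.execute("BEGIN")  # one transaction instead of one per INSERT
    conn.executemany("INSERT INTO t VALUES (?, ?)", rows)
    conn.execute("COMMIT")
    return conn

conn = bulk_insert([(i, str(i)) for i in range(10000)])
n = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(n)  # 10000
```

Without the explicit BEGIN/COMMIT, each INSERT would be its own transaction, and the per-transaction fsync cost dominates on a file-backed database.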
Nightly Benchmarks: Tracking Results with Codespeed
Background: Codespeed is a project for tracking performance. I discovered it when the PyPy project started using Codespeed to track performance. Since then, development has made it easier to set up and has added more display options. Anyway, two posts ago I talked about running nightly benchmarks with Hudson. Then in the previous post I […]
In: Uncategorized · Tagged with: benchmarks, continuous integration, jruby
Nightly Benchmarks: Setting up Hudson
For some projects, finding out about performance regressions is important. I’m going to write a two-part series about setting up a nightly build machine and displaying the generated data. This part covers installing Hudson and getting the benchmarks running nightly. I decided to give Hudson a try because I had […]
In: Uncategorized · Tagged with: benchmarks, continuous integration, jruby
Sort Optimization (Part 2) with JDK 6 vs JDK 7
In part 1, I went over my first foray into the world of sorting algorithms. Since then, I’ve had some other ideas on how to improve my quicksort implementation. One idea I had while originally working on the sorting algorithm was to rework the partition function to take duplicate elements into account. I had […]
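The post's implementation is in Java (hence the JDK 6 vs JDK 7 comparison). One standard way to make the partition duplicate-aware is a three-way ("Dutch national flag") partition, which groups keys equal to the pivot in the middle and excludes them from recursion — sketched here in Python as an illustration, not the post's actual code:

```python
def quicksort3(a, lo=0, hi=None):
    """Quicksort with a three-way partition: elements equal to the
    pivot end up in a[lt..gt] and are never recursed into, which
    helps on inputs with many duplicate keys."""
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return
    pivot = a[(lo + hi) // 2]
    lt, i, gt = lo, lo, hi
    while i <= gt:
        if a[i] < pivot:
            a[lt], a[i] = a[i], a[lt]
            lt += 1
            i += 1
        elif a[i] > pivot:
            a[i], a[gt] = a[gt], a[i]
            gt -= 1
        else:
            i += 1          # equal to pivot: leave it in the middle
    quicksort3(a, lo, lt - 1)   # recurse only on the strictly-less part
    quicksort3(a, gt + 1, hi)   # and the strictly-greater part

data = [5, 1, 3, 3, 3, 2, 3, 0]
quicksort3(data)
print(data)  # -> [0, 1, 2, 3, 3, 3, 3, 5]
```

On an array of all-equal keys, this variant does a single partitioning pass and no recursion at all, whereas a classic two-way partition still recurses to the base case.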