A couple of weeks ago I introduced TriLite, an Sqlite extension for fast string matching. TriLite is still very much under active development and not ready for general purpose use. But over the past few weeks I’ve integrated TriLite into DXR, the source code indexing tool I’ve been working on during my internship at Mozilla. So it’s now possible to search mozilla-central using regular expressions; for example, regexp:/(?i)bug\s+#?[0-9]+/ finds references to bugs.
For those interested, I did my end-of-internship presentation of DXR last Thursday; it’s currently available on air.mozilla.org (I’m third on the list, also posted below). I didn’t rehearse it very much, so apologies if it doesn’t make any sense. What does make sense, however, is the fact that dxr.allizom.org supports substring and regular expression searches, fast enough that we can facilitate incremental search.
(My end-of-internship presentation, slides available here)
Anyways, as you might have guessed from the fact that I gave an end-of-internship presentation, my internship is coming to an end. I’ll be flying home to Denmark next Saturday to finish my master’s. But I’ll probably continue to actively develop TriLite and make sure this project reaches a level where it can be reused by others. I’ve already seen other suggestions for where substring search could be useful.
The past two weeks I’ve been working on regular expression matching for DXR. For those who don’t know it, DXR is the source code cross referencing tool I’m working on during my internship at Mozilla. The idea is to make DXR faster than grep and support full featured regular expressions, such that it can eventually replace MXR. The current search feature in DXR uses the FTS (Full Text Search) extension for Sqlite, with a specialized tokenizer. However, even with this specialized tokenizer, DXR can’t do efficient substring matching, nor is there any way to accelerate regular expression matching. This essentially means DXR can’t support these features, because full table scans are too slow on a server that serves many users. So, to facilitate fast string matching and allow restriction on other conditions (i.e. table joins), I’ve decided to write an Sqlite extension.
Introducing TriLite, an inverted trigram index for fast string matching in Sqlite. Given a text, a trigram is a substring of 3 characters; the inverted index maps from each trigram to a list of document ids containing that trigram. When evaluating a query for documents with a given substring, trigrams are extracted from the desired substring, and for each such trigram a list of document ids is fetched. Document ids present in all lists are then fetched and tested for the substring, which reduces the number of documents that need to be fetched and tested. The approach is pretty much how Google Code Search worked. In fact, TriLite uses re2, a regular expression engine written by the guy who wrote Google Code Search.
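To illustrate the idea, here is a conceptual sketch in Python of trigram-filtered substring search (TriLite itself is C code against the Sqlite virtual table API, so this is just the principle, not the implementation). For regular expressions the trick is the same, except the set of required trigrams is derived from the regex:

```python
from collections import defaultdict

def trigrams(text):
    """The set of all 3-character substrings of text."""
    return {text[i:i+3] for i in range(len(text) - 2)}

class TrigramIndex:
    def __init__(self):
        self.index = defaultdict(set)  # trigram -> ids of documents containing it
        self.docs = {}                 # document id -> full text

    def insert(self, doc_id, text):
        self.docs[doc_id] = text
        for t in trigrams(text):
            self.index[t].add(doc_id)

    def search(self, substring):
        ts = trigrams(substring)
        if not ts:
            # Pattern shorter than 3 characters: no trigrams to filter on,
            # so every document is a candidate (a full scan).
            candidates = set(self.docs)
        else:
            # Only documents containing *all* trigrams of the pattern
            # can possibly contain the pattern itself.
            candidates = set.intersection(*(self.index[t] for t in ts))
        # The trigrams may appear far apart in a document, so candidates
        # must still be verified with a real substring test.
        return sorted(d for d in candidates if substring in self.docs[d])

idx = TrigramIndex()
idx.insert(1, "static void foo()")
idx.insert(2, "int bar(void)")
idx.insert(3, "// no functions here")
print(idx.search("void"))  # [1, 2]
```

The point is that the index only prunes candidates; the final substring test is what guarantees correctness.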
TriLite is very much a work in progress: currently it supports insertion and queries using substring and regular expression matching; updates and deletes haven’t been implemented yet. Anyways, compared to the inverted index structure used in the FTS extension, TriLite has a fairly naive implementation that doesn’t try to provide decent amortized complexity for insertion. This means that insertion can be rather slow, but maybe I’ll get around to doing something about that later.
Nevertheless, with the database in memory, I’ve been grepping over the 60,000 files in mozilla-central in about 30ms. With an index overhead of 80MiB for the 390MiB of text in mozilla-central, the somewhat naive inverted index implementation employed in TriLite seems pretty good. For DXR we have a static database, so insertion time is not really an issue, as the indexing is done on an offline build server.
As the github page says, TriLite is nowhere near ready for use by anybody other than me. I’m currently working to deploy a test version of DXR with TriLite for substring and regular expression matching, something I’m very much hoping to achieve before the end of my internship at Mozilla. So stay tuned, a blog post will follow…
This year I’m spending my summer vacation interning at the Mozilla office in Toronto. The past month I’ve been working on DXR, a source code indexing and cross referencing application. So far I’ve worked to deploy DXR on Mozilla infrastructure, and I’m happy to report that dxr.mozilla.org is no longer redirecting to dxr.lanedo.com. The releng team have a build bot indexing mozilla-central, so that we always have a fresh index on dxr.mozilla.org. So, in blind faith that this will never crash horribly, I’ll tell you to go take a look.
Whilst I’ve been handling most of the deployment issues, the releng team have been doing the heavy lifting when it comes to automated builds, thanks! This means that I’ve been working on cleaning up, refactoring, redesigning and rewriting parts of DXR. It sounds like a lot of behind-the-scenes work that nobody will ever see; however, it means that DXR now has a decent template system. And well, who would I be if I didn’t write a template for it?
You can find the tip of DXR that I’m working on at dxr.allizom.org. Please check it out and let me know if there’s something you don’t like, or something you’d like to see. I realize that the site currently has a few crashes, but don’t worry, I won’t rush this into production before I’ve rooted these out. Now, as evident from this blog, I’m by no means a talented designer, so if you feel that grey should be red or whatever, please leave a color code in the comments and I’ll try it out.
I’m currently looking into replacing the full text search and my somewhat buggy text tokenizer with a trigram index that’ll facilitate proper substring matching and, if we’re lucky, regular expression matching. But I guess we will have to wait and see how that turns out. In the meantime, leave a comment if you have questions, requests, suggestions and/or outcries.
I can’t say that I have been very active on this blog the past couple of years; nevertheless, when I decided to hack a little on the theme this morning I found an infestation of hidden links to all sorts of crap. Apparently this sort of thing is not unheard of on wordpress blogs.
Anyways, I’ve been planning to slowly phase out the “jopsen” nickname, and having recently bought “jonasfj.dk” I decided to do an export of my wordpress blog and set up a new one on the new domain. Dreamhost’s one-click install made this rather easy, and I now have automatic wordpress updates.
My old domain “jopsen.dk” will now redirect to “jonasfj.dk”, and I’ve deleted the infected wordpress install. I still have a wiki running on the old domain, but it’s no longer active, so if possible I plan to render it static, at which point I’ll move it off the domain.
This does not mean that I’ll get rid of the “jopsen” nickname entirely; I’ve got many accounts under that name. But in the future I’ll try to move some of the frequently used accounts to “jonasfj”. I think it’s a little more sensible and gives sort of a hint about who I might be 🙂 Update: Changed my github username from jopsen to jonasfj. Well, I think that’s all the renaming I can handle for now.
With respect to the “pharma hack”, I think it’s sad that we live in a world where some people have so little respect for my time. After all, the ad-space on my blog has no value compared to the cost of cleaning up after this. Worst of all is that these links were hidden; I only discovered them by accident, and I have no idea how long they’ve been there.
Anyways, I’ve disabled ftp, and moved to a different domain, a different shell user, and a clean wordpress install with a Google Authenticator plugin. At least that’s a start.
Whilst gedit isn’t the fastest, smartest or fanciest editor out there, I often find myself using it. With toolbars hidden and the menu bar gone (unity), gedit is a neat little thing. It always works, and whilst it lacks many features compared to vim, the features gedit does offer never pop up because I accidentally pressed some key.
The one thing about gedit that I do feel is missing, however, is the ability to modify shortcuts. For example, switching tabs with “Ctrl + Tab” and “Ctrl + Shift + Tab”, and closing tabs with “Ctrl + F4”, feels as natural as browsing the web in Firefox.
Luckily, it’s possible to install plugins for gedit. I recently found a plugin called Control Your Tabs that allows you to switch tabs with “Ctrl + Tab”. However, it switches tabs in most-recently-used order, instead of by their order in the tab view.
I had a quick look at the source for Control Your Tabs, which turned out to be slightly complicated. While I could hack the source to fit my needs, it turned out to be faster and simpler to just write my own gedit plugin.
So here it is: TabControl, 50 lines of python, now hosted at github.
The plugin switches tabs with “Ctrl (+ Shift) + Tab” based on their order in the tab view. On top of this, it allows you to close the current tab using “Ctrl + F4”. You can find files and installation instructions in the github repository.
The plugin is really short and simple, so if you want a shuffle feature for switching tabs (or whatever), this could be a good place to start; see the sketch below for a flavor of it. Otherwise, I’d definitely recommend taking a look at the gedit plugin documentation.
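Here is a minimal sketch along the lines of TabControl. It’s illustrative and written from memory, so treat the details (in particular reaching the GtkNotebook via the tab’s parent) as assumptions rather than the exact TabControl code; a real plugin also needs a small .plugin metadata file next to it:

```python
from gi.repository import GObject, Gdk, Gedit

class TabControlSketch(GObject.Object, Gedit.WindowActivatable):
    """Cycle tabs with Ctrl(+Shift)+Tab, close the current tab with Ctrl+F4."""
    window = GObject.Property(type=Gedit.Window)

    def __init__(self):
        GObject.Object.__init__(self)

    def do_activate(self):
        self._handler = self.window.connect("key-press-event", self.on_key)

    def do_deactivate(self):
        self.window.disconnect(self._handler)

    def on_key(self, window, event):
        ctrl = event.state & Gdk.ModifierType.CONTROL_MASK
        tab = window.get_active_tab()
        if not ctrl or tab is None:
            return False  # let gedit handle the event
        if event.keyval in (Gdk.KEY_Tab, Gdk.KEY_ISO_Left_Tab):
            # Tabs are pages of a GtkNotebook, so cycling by position is
            # ordinary notebook page arithmetic. Shift+Tab arrives as
            # the ISO_Left_Tab keyval.
            notebook = tab.get_parent()
            step = -1 if event.keyval == Gdk.KEY_ISO_Left_Tab else 1
            pages = notebook.get_n_pages()
            notebook.set_current_page((notebook.page_num(tab) + step) % pages)
            return True  # swallow the keystroke
        if event.keyval == Gdk.KEY_F4:
            window.close_tab(tab)
            return True
        return False
```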
A few days ago a diploma dropped through the door, meaning that after 3 years at Aalborg University I’ve got a “Bachelor of Science (BSc) in Computer Science”, as it says on the paper… I’m of course continuing as a master’s student next – no reason to get out “there” in the real world, where you have to work, assuming you don’t want to starve 🙂
Anyways, it occurred to me that I haven’t blogged about my Bachelor project. So I’d better get it done now, as I’m off for vacation in California later tonight… This semester we worked in groups of 3 and were supposed to write a 12-20 page article, as opposed to a 100-200 page report. That turned out to be quite challenging, but also somewhat nice, because we got to polish every sentence.
As the title of the post indicates, we did a project about Petri nets, a very simple but powerful modeling language. To achieve a “feeling” of novelty we introduced global discrete variables that you can condition on and modify in transitions, and called our new model Petri nets With Discrete Variables (PNDV). Whilst, to my knowledge, this has not been done before, we didn’t focus on showing that PNDVs were particularly useful for any specific purpose. So that part of the project feels a little shoehorned, at least to me, and maybe only me, because we all got an A for the project.
Nevertheless, assuming that the PNDV model (we invented) is interesting, the graphical Petri net editor and verification tool we wrote during this project might also be interesting. In a desperate search for a name we came up with PeTe; yes, it’s spelled pretty weirdly, but it’s also quite cute 🙂
Given a PNDV or Petri net, PeTe can determine whether a state satisfying a given formula is reachable. This problem is very hard (EXPSPACE-hard), but we had a lot of fun writing different search strategies for exploring the state space of a PNDV. Most notably, we found that even quite simple heuristics can provide a huge performance improvement by guiding the search through the state space. We also had great success with over-approximation, using the state equation and trap testing to disprove the satisfiability of a formula.
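To give an idea of what “guiding the search with a heuristic” means here, below is a toy best-first reachability check for a plain Petri net (markings as token-count vectors, transitions as pre/post vectors). It’s only a sketch of the principle; PeTe’s actual C++ implementation, the PNDV variables and the formula language are all considerably more involved:

```python
import heapq

def reachable(initial, transitions, goal, heuristic):
    """Best-first search for a marking satisfying goal.

    initial:     tuple of token counts, one entry per place
    transitions: list of (pre, post) tuples of the same length
    goal:        predicate on markings
    heuristic:   estimated distance from a marking to a goal marking;
                 a good heuristic explores far fewer markings.
    Note: may not terminate if the net is unbounded and the goal unreachable.
    """
    queue = [(heuristic(initial), initial)]
    seen = {initial}
    while queue:
        _, m = heapq.heappop(queue)
        if goal(m):
            return m
        for pre, post in transitions:
            # A transition is enabled if every place holds enough tokens.
            if all(t >= p for t, p in zip(m, pre)):
                n = tuple(t - p + q for t, p, q in zip(m, pre, post))
                if n not in seen:
                    seen.add(n)
                    heapq.heappush(queue, (heuristic(n), n))
    return None

# Two places; the single transition moves one token from place 0 to place 1.
net = [((1, 0), (0, 1))]
goal = lambda m: m[1] >= 2
print(reachable((3, 0), net, goal, lambda m: max(0, 2 - m[1])))  # (1, 2)
```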
The heuristics and methods implemented in PeTe are presented in the article we wrote, available for download below. The article also presents some, in my opinion, rather shoehorned results, like a translation from Discrete Timed-Arc Petri Nets to PNDVs. I also can’t help but feel that the direction and goal of the article could have been clearer. But done is done, and I’m off on vacation once I’ve added some links 🙂
Being slightly bored with school work, I decided to take a day off and work on zbar-sharp, and a few minutes ago I tagged a new release of zbar-sharp. These improvements have been underway for quite a while, and many of them have been available in the repository on github for a long time. If you plan to use zbar-sharp, I’d recommend that you check out the git repository once in a while.
The highlights of this release are:
ZBar.Image: Constructor for loading from System.Drawing.Image (e.g. Bitmap import)
ZBar.Image: Convenience function for FourCC codes.
ZBar.Symbol: Support for QR-codes (thanks to ZBar)
GtkZBar.Scanner: Support for rotation (Special thanks to Patrick McEvoy)
ZBar: Version by (Special thanks to Brandon McCaig)
Tests: Unit tests for zbar-sharp and ZBar.
Improved and reorganized examples…
In particular, support for initializing ZBar.Image from an instance of System.Drawing.Image is very nice. Notice that System.Drawing.Bitmap is a subclass of System.Drawing.Image, so this constructor allows you to load images from files into a ZBar.Image that can be scanned.
However, this feature would have been fairly unstable without a fix for the memory management issue in ZBar.Image.Data, which previously caused applications to crash at random. Thanks to the commenter going by the name thedarkking, whose comment finally gave me a clue as to where the bug was.
By the way, if you have any comments or questions, don’t be afraid to leave a comment. You’re also welcome to use the issue reporting system at github to report bugs or feature requests. If you wish to contribute, just fork the project on github and push your code.
This semester we did a project in machine intelligence; however, as I find probability calculus and Bayesian networks utterly boring, I convinced my group to do a project about triangulation of Bayesian networks. A triangulation of an undirected graph G is a set of edges, called fill-ins, such that G with these fill-ins added is a chordal graph (a graph that doesn’t have a chordless cycle of more than 3 nodes). Triangulation of graphs is used for many purposes, and with different optimality criteria, such as minimum number of fill-ins, minimum clique size (treewidth) and minimum sum over the maximal cliques (optimal table size), the latter being the criterion relevant for Bayesian networks.
The problem of finding an optimal triangulation is NP-hard (with respect to the mentioned optimality criteria). And since there are many fairly good, simple greedy heuristics, it can be argued that algorithms for optimal triangulation of Bayesian networks are uninteresting. However, to check whether a greedy heuristic is good, optimal triangulations are needed. It’s also possible that optimal triangulation might be worthwhile if computed offline, especially if the application needs to process a lot of data or run on an embedded platform. Or whatever, optimality is always slightly interesting, if not for anything useful, then for the fun of finding it…
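As an example of the kind of simple greedy heuristic meant above, here is a sketch of the classic min-fill strategy in Python: repeatedly eliminate the vertex whose neighbourhood needs the fewest fill-ins to become a clique. This is illustrative only, not our project’s implementation:

```python
def min_fill_triangulate(adj):
    """Greedy min-fill triangulation.

    adj: dict mapping each vertex to the set of its neighbours.
    Returns the set of fill-in edges (as frozensets of two vertices).
    """
    adj = {v: set(ns) for v, ns in adj.items()}  # work on a copy
    fill_ins = set()

    def fill_count(v):
        # Number of edges missing between v's neighbours.
        ns = list(adj[v])
        return sum(1 for i in range(len(ns)) for j in range(i + 1, len(ns))
                   if ns[j] not in adj[ns[i]])

    while adj:
        v = min(adj, key=fill_count)  # vertex needing fewest fill-ins
        ns = list(adj[v])
        for i in range(len(ns)):      # make v's neighbourhood a clique
            for j in range(i + 1, len(ns)):
                a, b = ns[i], ns[j]
                if b not in adj[a]:
                    adj[a].add(b)
                    adj[b].add(a)
                    fill_ins.add(frozenset((a, b)))
        for n in ns:                  # eliminate v from the graph
            adj[n].discard(v)
        del adj[v]
    return fill_ins

# A 4-cycle a-b-c-d needs exactly one fill-in to become chordal.
graph = {"a": {"b", "d"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"a", "c"}}
print(min_fill_triangulate(graph))  # e.g. {frozenset({'b', 'd'})}
```

The order in which the loop eliminates vertices is an elimination order, and the fill-ins it produces triangulate the graph; the optimal algorithms discussed below instead search over all elimination orders.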
In this project we implemented some simple greedy triangulation heuristics and compared their results to the optimal solutions. But most interesting is probably our work on search algorithms for optimal triangulations of Bayesian networks. We based our work on two articles by Thorsten J. Ottosen and Finn V. Jensen, who do a best-first or depth-first search in the space of all elimination orders. As far as we’re aware, these articles are the only ones that propose algorithms for optimal triangulation of Bayesian networks.
In this project we made some good improvements to the optimal triangulation algorithms. We reduce the search space by excluding elimination orders resulting in the same triangulation. Our improvements provide around a 5x speedup on sparse graphs, and can be applied regardless of the optimality criterion. I think this project was really cool, because I got to play with something that was new and fairly untouched. And the fact that I managed to propose a few interesting improvements didn’t make it any less cool 🙂
So after a 9-hour nap on the City Night Line, I woke up on my way home after a few days in Nuremberg, where I attended openSUSE Conference 2010 and, apart from attending some interesting talks, met some of the cool LibreOffice hackers…
As a GSoC student with Go-OO (now LibreOffice), I was invited to say a few words about my GSoC project. I started by defining what visual formula editing is. Then I touched on some of the rather unpleasant problems I encountered when adding this capability to LibreOffice Math.
I also demonstrated the hack, and used a small crash-bug to show that it wasn’t ready to be shipped yet. However, despite the crash-bug, Michael Meeks suggested that we try to put it in as an experimental feature that can be enabled at runtime. Whether or not it’s going to make the 3.3 release, time will tell. In any event, there’s still a rather long list of things to do before this hack can be considered stable.
I’ve spent most of my summer working on my GSoC project, which was to create a visual formula editor for OpenOffice Math. Currently, formulas are entered in OpenOffice Math using a plaintext command language; this can be efficient and easy for power users, however, it’s an absolute show stopper for most casual users. So I’ve spent my summer writing a visual formula editor for OpenOffice Math, and you can see a demonstration here:
I participated in GSoC for Go OpenOffice, a project that maintains a set of patches on top of OpenOffice. Go OpenOffice is the OpenOffice version distributed with openSUSE, Ubuntu and other distros; it is allegedly a lot better than the official OpenOffice release, and it’s also available for Windows.
Hacking OpenOffice has been a very exciting experience. I haven’t worked on a project so large and complex before. It easily takes 2 hours to build OpenOffice, and the sources and binaries fill about 13 GiB. Luckily I didn’t have to rebuild every time I had to test something.
The visual formula editor, see the video above, is not production ready yet. That is, it needs extensive testing and a few extra features… However, I plan to keep working on it. You can read more about its features and current status here.
I don’t think I’ll keep updating that wiki page, but rather post some updates here once in a while. If you are eager to help test this feature when it comes that far, feel free to leave a comment with your email…