wine wrapper that wraps wine, Crossover etc. into a single unified interface, with full bottle support. This version contains several improvements compared to 0.2. It adds additional information to the output of the --kill and --drykill commands. These now list the bottle and wine flavour the process belongs to when used under Linux. It’s also now possible to combine --kill or --drykill with --wine and/or --bottle, allowing targeted kills of only a single bottle instead of all running processes.

This version also introduces support for wine packages installed via PlayOnLinux. PlayOnLinux supplies many prebuilt wine packages, easing wine installation and allowing several to be installed side-by-side. wwine reads the list of installed PlayOnLinux wine packages and adds those to the list of available --wine versions. It also adds support for using PlayOnLinux bottles, so wwine may be used on games and software installed through PlayOnLinux. Users may now use wwine --list to list available wine versions.
Lastly, several minor tweaks were made, including improved support for GameTree Linux and the addition of --cxg-installdir (which complements the existing --cxinstalldir). A crash that could occur when --wrap was combined with --wine without also supplying a --bottle was fixed, and the deprecated parameter --env-bottle was removed.
It is available for download from the wwine website.
gpgpwd. It is a simple password manager for people that live on the command-line. It stores a list of passwords in a GnuPG-encrypted file, which it then provides an interface for retrieving, adding, changing and removing entries from.

The basics are simple: gpgpwd set X sets the entry for X in the file. The password is not accepted on the command-line, but will be requested interactively to avoid it showing up in ps. gpgpwd will provide you with a randomly generated password that you can use, or you can provide your own. gpgpwd get X retrieves the entry for X (and copies it to the X11 clipboard unless --no-xclip is supplied) - this command takes a regex, so you don’t have to type out the entire name each time. Finally, gpgpwd remove X removes the entry for X.
In addition to these basic commands it also has a few extra features to make life a bit more convenient, the main one being git support. You can tell gpgpwd that your password file is inside git, which will make gpgpwd run git pull before reading the file, and git commit + git push after writing to it, allowing you to easily synchronize your passwords between several computers. It also has a command that will let you batch-add passwords from a file, if you’re importing passwords you already have stored.
You can grab it from its website, where you can also read the full
The most interesting new feature, however, is the addition of the --env and --tricks parameters. --env causes wwine to set the WINE and WINEPREFIX variables to the syntax used by vanilla wine; this allows various wine scripts that use those to run using wwine’s bottles, as well as with Crossover. The most interesting use of this is the ability to use winetricks with Crossover. This effectively lets you use Crossover as any other wine release, while still using Crossover’s bottles. So if you have winetricks and Crossover installed, you need only run wwine -w cx -b BOTTLE --tricks ACTION to use winetricks with Crossover.
Octopress is “A blogging framework for hackers”. It’s based on jekyll, a ruby framework for generating static websites. What sold me on it is the ability to use the best blog-writing client available: a normal shell along with vim. Blog posts are written in markdown (by default), the source is in git, and it comes with rake targets for creating new posts, building it and deploying it.
As this should make it a lot easier to write posts, I’m hoping that it justmight make me post more often.
Outside of that, the standalone build has been stripped down to the bare necessities, shrinking the minified standalone build of jQsimple-class from 10KiB to 3KiB (the version that uses jQuery is just 1.5KiB).
jClass()
takes a single parameter, a JavaScript hash, where keys are method or attribute names, and the values are any valid JavaScript type. jClass.extend()
lets you build a class that extends one or more existing classes. jClass.virtual()
lets you construct a “virtual” class. That is to say, a class that can not be instantiated, but that can be extended by others.
Internally jQsimple-class uses some jQuery methods, but it does not depend upon jQuery to be used; a standalone version that bundles the parts it needs (not all of jQuery, and without exposing them to the public namespace) is available for applications that do not use jQuery. I have written an extensive test suite for jQsimple-class to make sure that things work as they should, and it works across all modern browsers.
For more examples and the full API, see the jQsimple-class documentation. jQsimple-class version 0.1 is available for download now. Minified it is only 1.5K (or 9K for the standalone version). Any feedback is welcome - feel free to leave it in the comments, or, if you find a bug, on the bugtracker.
It uses Term::ReadLine, which gives a simple session history if you have a Term::ReadLine::* implementation that supports it. It will also use Data::Dumper so that you can quickly see any data structures; you can always use scalar(STATEMENT) if the return value differs in list and scalar context.
Here’s an alias that can be shoved into .bashrc:

alias perl-repl='perl -MData::Dumper -MTerm::ReadLine -e '\''$r = Term::ReadLine->new(1);while(defined($_ = $r->readline("code: "))){$ret=Dumper(eval($_));$err=$@;if($err ne ""){print $err;}else{print $ret;}}'\'''
Other than that I extended the command-line parser, so you can now say swec example.com -s /test.html where you would previously have had to do swec --baseurl example.com -s /test.html. Beyond that it’s mostly a bunch of cleanups, some refactoring and a few minor bugfixes, in addition to a new test suite so the thing can be properly sanity-checked before release.
If you need to sanity check dynamic websites, give SWEC a go.
perldoc and ri, so documentation is a quick command away in any of my terminals, which thanks to screen is never fewer than ten. PHP, however, has no such tool; the docs are in HTML and many distros don’t even package the HTML docs. So, to avoid the pain of switching out of the safety of my terminal and into a web browser all the time, and to speed up my work, I wrote an app, phpdocr. It’s quite simple: it scrapes php.net (and caches the result for quick viewing later) and displays the parsed HTML in your pager, resulting in something sort of like perldoc or ri. So if you have the same itch, grab it from http://random.zerodogg.org/phpdocr. The app itself, of course, is not written in PHP - it’s written in ruby.
While doing this I found myself missing the old perl -c to quickly sanity check code; however, naturally that won’t work on Mason, as Mason is essentially HTML with inline perl, not the other way around. As such I wrote a quick script that emulates perl -c by loading the file using Mason inside eval and then printing any errors. The script itself is pretty simple, though it doesn’t have any support for printing useful line numbers - but at least it gives an idea of what/where the problem is. The script also declares $c and $m, as, at least for Catalyst, those will be available.
My shell is bash, and up until now I used a very simple bash completion for git, but at times I do see myself wanting something a bit more comprehensive. However, I really don’t want bash to be slow to open (of course, the definition of “slow” is quite individual - over a second is way too much ;), which it can be if it needs to load all bash completion definitions when starting. Therefore I wrote a small bash function for my .bashrc that will dynamically load the git bash completion when it is first accessed. Bash starts fast, and I get git bash completion - problem solved (well, the first time git bash completion is used it of course takes a tad longer than normal, because it needs to load it first, but that’s completely livable). As a bonus, it will fall back to my old and simple completion if the proper one is not available.
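A minimal sketch of the idea (the completion-file path and the fallback are assumptions, not the exact function from my .bashrc):

```shell
# Lazy-load the real git completion the first time TAB is hit after "git".
_lazy_git_complete() {
    local real=/usr/share/bash-completion/completions/git  # path is an assumption
    complete -r git 2>/dev/null    # drop this stub registration
    if [ -r "$real" ]; then
        . "$real"                  # the real file registers the full completion
    else
        complete -o default git    # fallback: plain filename completion
    fi
    return 124                     # tell bash to retry with the new completion spec
}
complete -F _lazy_git_complete git
```

Returning 124 from a completion function makes bash restart completion using whatever spec is now registered, so the stub only ever runs once.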
The date for the Norwegian general election is closing up fast, and I would like to urge my readers to vote for the Socialist Left Party (SV). The last chance to vote before the actual election day is, in many municipalities, tomorrow, the 11th of September. The actual election day is the 14th of September. Remember that every single vote counts!
We don’t need any dark blue experiment with our economy, healthcare or our children. We need a fairer government that works for equality, that takes climate change seriously and is prepared to act now rather than later, when it’s too late. We need a country where women earn the same as men, not merely 85% (avg.) of what a man makes, and where we treat everyone with the same amount of respect.
The economic crisis has shown us how bad it can get when we allow as much market freedom as we do. A select few gamble on the stock market, keeping any earnings for themselves while socializing the losses when their bets fail. Norway has, largely thanks to SV’s socialist finance minister, managed the financial crisis very well. We need a market that is more controlled, not less! We cannot allow capitalists to gamble away our jobs, nor can we allow the parties on the right to privatize our healthcare system, throwing it into the same chaos that caused the crisis we are now in.
If you want more information about SV’s politics, visit http://www.sv.no/ (http://sv.no/Language/English for English), or contact me directly and I will try to answer any questions you have.
For these reasons, and more (see the website), vote SV the 14th of September.
mussort is a simple command-line music sorting program. It recursively processes a directory tree, and then sorts whatever music files it finds there, renaming the files and putting them in a nice directory tree.
0.2 added a load of features designed to make mussort faster. It introduced optional caching of file tags, which has a major impact on performance on subsequent runs over a directory tree. I optimized away an insane number of readdir() calls that it kept making over and over, even though nothing had actually changed since the last readdir(). Previous versions also only supported id3info and ogginfo as sources for tag information, which was problematic because ogginfo is very, very slow at times. So in 0.2 it can use the Audio::File perl module if it is available. This provides redundancy (should Audio::File fail for an ogg file, it falls back to ogginfo; should id3info fail for an mp3 file, it tries Audio::File) and a large speed increase for ogg vorbis files. It can also use id3v2 if it is available. Thanks to the caching, however, even without Audio::File any subsequent runs on ogg vorbis files will be a lot faster.
When it comes to actual features, the largest one is support for detection of compilation albums. It will locate an album that contains a lot of different artists and then put those into a single directory named after the album, rather than put them into separate artist/album dirs. For those that don’t want that, it is important to note that the feature is optional and must be explicitly requested (like case-insensitive sorting).
Other than that there’s a bunch of code cleanups, along with minor additions, such as selectable verbosity (--verbose, --quiet) and the option to keep all duplicate files around (--keepdupes). mussort is also now hosted on github, so if you are interested, fork the repository and let’s see what cool stuff you can come up with! Remember to prod me with a pull request so that any nice things you do get included upstream.
The largest new feature is the plugin system. Day Planner now comes with support for plugins, complete with a simple file format that allows users to easily install third-party plugins. Its purpose is of course to make it easy for other people to alter the behaviour of Day Planner or add features to it, without having to resort to patching the app itself, but also to make it easier for me to add optional features that perhaps not everyone wants (for instance, 0.10 comes with a tray icon plugin; it is disabled by default, but those that want to use it can enable it quite easily). The API is simple, and somewhat inspired by the Gtk2-perl API, to make it feel familiar for people already used to signal-based programming.
The tarball comes with an example plugin, plugins/HelloWorld.pm that is well commented and explains how to do some of the basic things like hooking into signals, displaying simple dialog boxes and adding events to the calendar. The API itself is documented in DP::CoreModules::Plugin (access the documentation by running perldoc modules/DP-CoreModules/lib/DP/CoreModules/Plugin.pm from the base directory of the Day Planner tarball or git repo).
If you want to write a plugin, and need some help or pointers, feel free to join the Day Planner irc channel, #dayplanner on irc.freenode.net and I’ll be glad to help.
Git
As mentioned earlier, Day Planner is now using git instead of subversion. After learning git I now greatly prefer it over subversion, and have thus moved all of my projects to it. Information on how to use the Day Planner git repo can be found at http://www.day-planner.org/index.php/development/git.
29.12.08 - Sent from Drammen, Norway
29.12.08 - Arrived at a terminal in Oslo
30.12.08 - Arrived at a terminal in Stavanger
31.12.08 - Arrived at a terminal in Haugesund
02.01.09 - Beep! Still in Haugesund, and the post office realizes that we have moved and that we have bought a service to forward our mail to our new address
02.01.09 - Arrived in Stavanger. Gah! That’s the wrong way!
05.01.09 - Registered at a terminal in Bergen. Yay! It’s getting close!
06.01.09 - Arrived at our local post office in Bergen. Hurray.
…However, they never sent us any packing slip, and we didn’t have the tracking number so we didn’t know.
16.01.09 - Beep! Still in Bergen, and the post office realizes…something and decides to ship it somewhere else.
16.01.09 - Registered at a terminal in Bergen
19.01.09 - Registered at a terminal in Stavanger?!
20.01.09 - Arrived at a post office in Haugesund…again
31.01.09 - Beep! Still in Haugesund, and the post office realizes, once again, that we have moved and that we STILL have purchased the service to forward it to our new address.
02.02.09 - Registered at a terminal in Stavanger…yet again
03.02.09 - Registered at a terminal in Bergen
04.02.09 - Arrived at our local post office in Bergen…again. Hurray!
…But they STILL haven’t sent us any packing slip stating that the package has arrived and that we need to pick it up. We contact the retailer, which contacts the repair shop, which provides the information that it is here! At our local post office! So we go to our local post office, I present my ID and explain that there’s a package for me, aaand… they can’t find it. We go home and yet again mail the retailer, which contacts the repair shop, which then gets hold of the tracking number. Armed with this brand new information, we head to the post office again…
18.02.09 - We get the package.
And no, they still haven’t provided a packing slip.
This is true Norwegian efficiency.
The post is about the LGP community. You can head over to the LGP blog to read it.
The basic syntax will be something like this (any input is welcome):
URL http://...
GET
MATCH /regex/ or STRING
RUN_CHECKS
URL http://
POST 'SOME_POSTDATA'
MATCH /regex/ or STRING
[My regex] MATCH /regex/
[String equality] MATCH STRING
RUN_CHECKS
RUN_MAIN
Here any URL statement defines a new check, where all previous data is dropped. Each section can have a POST or GET statement, and then any number of MATCH statements, as well as a RUN_CHECKS statement. If any MATCH statement fails (ie. the regex doesn’t match, or the result isn’t equal to STRING) then it will skip the remaining tests and skip ahead to the next URL or RUN_MAIN statement.
MATCH is obvious: it runs a test on the entire content to see if it matches a regex, or equals a string.
RUN_CHECKS would start the standard (SDF-based) SWEC checks on the returned data.
RUN_MAIN would start the main SWEC mode.
[SOMETHING] MATCH would create a named match, so the content within [ ] would be the returned error if it doesn’t match, instead of something generic like “failed to match /regex/”.
Other commands I’ve thought of that I might or might not want to do include one to clean the cookiejar, so that tests can be performed on how a page acts when cookies are missing, and a way to add custom skip filters based upon for instance the URL.
These are just random ideas and plans that I’ve got at the moment. I haven’t begun coding it yet, but it’s definitely something I’m going to do at some point. I’ll welcome any input if you have any.
I wanted a simple way to sanity check a site, to ensure that my article changes didn’t suddenly break comments on images (legacy apps are strange beasts). So I ended up writing SWEC, the simple web error checker. It’s a basic app that goes through all links in a site (or “webapp”) as long as those are present in the HTML (ie. it doesn’t run any JS, so its use in JS/AJAX/AJAJ-heavy webapps can be somewhat limited). It parses all pages it downloads, looking for known errors, and then reports those. For instance, if you run it on a site based on Catalyst (perl) and Catalyst crashes with its standard backtrace, SWEC will report which page it happened on, which page referenced it and a quick line about what happened. Ie. if it’s an exception it’ll say “Exception in Catalyst controller”.
It uses a very simple file format for writing tests (which is well documented in SWEC’s manpage). It has several different types of tests, but the most common one looks something like this:

[SWEC_CATALYST_CONTROLLER_EXCEPTION]
type = regexs
check = Caught exception in.Controller.Request.Catalyst
error = Exception in Catalyst controller
sortindex = 11

What’s between the brackets [ ] is the name of the test. All tests that are shipped with SWEC are prefixed with SWEC_. The type defines which “type” of test it is. This one is “regexs”, which is a ‘smart’ regex: a standard perl regex that swec modifies during runtime to more easily match HTML. The check is in this case a normal perl regex that is applied to the entire HTML document. As the type is regexs, swec will modify the regex to this during runtime: Caught(\s+|&nbsp;|<[^>]+>)+exception(\s+|&nbsp;|<[^>]+>)+in.Request.*Catalyst
The error is the string that will be returned, and the sortindex is used for prioritizing tests, the lower the better (bundled tests will always be positive, so one only needs to give tests a negative index to ensure they will be run before bundled ones).
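The runtime rewrite described above can be sketched like this (an illustration of the idea, not SWEC’s actual code; I’m assuming the middle alternative in the rewritten pattern is the &nbsp; entity):

```shell
# Replace each literal space in a plain regex with a pattern that also
# matches HTML whitespace: \s runs, &nbsp; entities, or interleaved tags.
plain='Caught exception in'
smart=$(printf '%s' "$plain" | sed 's/ /(\\s+|\&nbsp;|<[^>]+>)+/g')
printf '%s\n' "$smart"
# prints: Caught(\s+|&nbsp;|<[^>]+>)+exception(\s+|&nbsp;|<[^>]+>)+in
```

The rewritten regex then matches the phrase even when the page breaks it up with tags or entity-encoded spaces.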
By default the bundled tests (default.sdf) and the user-specific rc file ~/.swecrc will be loaded. The user-specific one can disable bundled tests easily, and you can disable them on the command line on an individual basis.
SWEC supports sessions, where SWEC remembers previously checked URLs and previous errors, and can then either check pages that used to have errors before the others, or only report ‘new’ errors that did not exist before. A session will also remember all settings that you set, so you don’t have to type them every time (although it’ll allow you to do that as well). It has cookie support, so it will run just fine as a logged-in user, though you probably don’t want to run it on a live database, but rather a test one, as it’ll click on any link it sees (with a few exceptions: it tries to avoid ‘logout’ and ‘delete’ links; additions to the exceptions list are welcome).
It’s GPLv3, so feel free to hack your own things into it. I’ll accept patches for the app itself, as well as new tests to be bundled. As long as they are specific to a language, web server or framework, I’ll happily add more bundled checks (or fixes to existing ones); however, I will try to avoid app-specific checks, as that might just get to be a bit too much.
Happy hacking
I for one can’t wait for this port to be released.
The winner will be the first person to guess which game it is, based upon the slowly revealing image on http://competition.linuxgamepublishing.com/.
The prize: the chance to know what new game is coming out for Linux, and the chance to win the first copy of it produced. Yay :).
If you want Linux games already released, head to TuxGames. (Yes that’s my referral link ;)