Creating a UEFI bootable Win7 USB stick from Linux
Andrew Rowls · 2013-12-09

I run linux-only, and have for several years now.  I've also recently gotten into gaming with Steam.  They've made great progress in making things playable on linux, and most of what's not available specifically on linux works nearly-flawlessly under wine.  I started playing Half-Life 2, which runs pretty great natively on linux; but when I learned about CinematicMod, which is sadly windows-only, I decided I had to make a windows install to try it out.  Lacking an optical drive of any nature on this machine, the natural solution was to install windows from a USB stick.

Note: If you install windows to a partition on a drive that already has linux installed, windows will nuke your grub setup.  Read up on how to restore grub after a windows install.  If you install windows to a separate drive, you'll need to run sudo update-grub2 (or sudo update-grub for legacy grub users) from linux in order to add windows to your grub menu.  Of course, your BIOS will happily boot windows every time if you make your windows drive the first boot choice, so don't forget to make sure your linux drive is at the top of your BIOS's boot list.

Requirements:

  • Existing linux installation
  • 4GB or larger USB stick.  Yes, 4GB is the minimum; the files will take somewhere around 3.7GB.
  • WinUSB; Ubuntu users will find convenient links to .deb files for all the latest Ubuntu versions, as well as an apt repository.  Everyone else has to compile from the source tarfile.
  • Windows 7 installation disc ISO (or physical media and a drive to read it with; a Windows 8 or 8.1 disc will also likely work, but I don't know if Vista or lower will)

Process:

  1. Prepare the USB stick.  Remove any files you want to keep from the stick, because you will be nuking it -- no, you can't keep your old partition(s), because we have to create a new partition table type (unless the drive already has a gpt partition table; sudo parted /dev/sdx print will tell you which type you have, but chances are it's msdos).  Open it in gparted (sudo gparted /dev/sdx, replacing x as appropriate); go to the Device menu, select Create Partition Table..., open the "advanced" option, choose gpt from the dropdown, and click Apply.  This will nuke all data on the drive, instantly, so be well warned.  Once that's done, create a 4GB or larger FAT16 partition.  Why FAT16?  The UEFI specification explicitly requires firmware to support FAT32 for fixed drives but only FAT16 or FAT12 for removable media, so FAT16 is the guaranteed-safe choice for a USB stick.  That said, most firmware will read FAT32 from removable media too -- FAT32 worked fine for me -- so use it if you prefer.  (If you'd rather do this step from the command line, see the shell sketch after this list.)
  2. Use WinUSB to install the ISO.  Once you have WinUSB installed, you'll have two commands, winusb and winusbgui.  Don't use winusbgui.  It's the easy graphical way to do it, but it will re-nuke your drive and install an NTFS partition, which your motherboard's UEFI firmware probably can't boot from.  So to keep it FAT, as it were, use winusb's --install command: winusb --install /path/to/win7.iso /dev/sdx1, with the ISO path adjusted as appropriate and /dev/sdx1 being the FAT partition you created in step 1.  If you get curious and run winusb --help, you'll notice there's also a --format command.  Don't use --format, either: it creates an NTFS partition too, and is the same command winusbgui uses.
  3. Grab a coffee, energy drink, OR relaxing beverage of your choice.  I don't recommend combining coffee and energy drinks.  I've known people to go completely jittery and shaky from that mix.  If you do try that, I don't recommend adding other beverages to that mix, either.  But whatever.
  4. Wait.  How long?  That depends.  You're copying 4GB of data over a USB connection.  My machine has a 3.4GHz Core i7, but the USB stick and USB port were both USB 2.0 (I didn't notice a difference between 2.0 and 3.0 ports, but I didn't benchmark it, either), so I ended up waiting about 8 minutes for the process to finish.  Using a USB 3.0 device in a similarly-equipped slot will probably go much faster; a lesser CPU may make it go slower.  Enjoy that coffee (you mixed it with an energy drink, didn't you?  Don't say I didn't warn you...)
  5. Final touches.  Turns out WinUSB isn't quite so thorough with its EFI setup as we would like it to be, so there's one last step.  Mount the FAT partition and copy the efi/microsoft/boot directory to efi/boot -- that is, cp -r /path/to/mount/efi/microsoft/boot /path/to/mount/efi/.
  6. Boot it.  sync your changes, umount the drive, and reboot.  Tell your BIOS to boot from the USB stick, and you should be greeted with the Windows installation wizard.  If not, see this article for a few other things you can try (specifically, see step 11 under Manually Create a Bootable UEFI USB Flash Drive).
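
For reference, here's roughly what steps 1, 2, and 5 look like from a plain shell, using parted and mkfs.vfat in place of gparted. Treat this as a sketch rather than a transcript -- /dev/sdx, the mountpoint, and the ISO path are placeholders you'll need to adjust:

sudo parted /dev/sdx print                            # incidentally shows the current table type
sudo parted /dev/sdx mklabel gpt                      # step 1: new gpt table; NUKES THE DRIVE
sudo parted /dev/sdx mkpart primary fat32 1MiB 100%
sudo mkfs.vfat /dev/sdx1                              # pass -F 16 to force FAT16 instead

sudo winusb --install /path/to/win7.iso /dev/sdx1     # step 2

sudo mount /dev/sdx1 /mnt                             # step 5: fix up the EFI directory
sudo cp -r /mnt/efi/microsoft/boot /mnt/efi/
sync && sudo umount /mnt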

And enjoy trying to shoot high-definition antlions and headcrabs while you shake uncontrollably because you mixed coffee with energy drinks.

Date math
Andrew Rowls · 2013-02-05

Some date math I worked out (in javascript) while improving calendar week support for bootstrap-datepicker.

Given weekstart, a weekday on which the week starts (0 for Sunday, 1 for Monday, etc), get the first day of a given date’s week:

var date = new Date();
var start = new Date(+date + (weekstart - date.getDay() - 7) % 7 * 864e5);

Given a date, you want the nearest date in the past that has start.getDay() == weekstart, inclusive of date. For the simple case of a Sunday weekstart (0), this is simply date.getDay() days in the past, or -date.getDay() days “in the future”. For the next case, a Monday weekstart (1), it’s one day after that, or -date.getDay() + 1 days “in the future”; this doesn’t work, though, for dates whose getDay is less than weekstart, such as the Sunday before a starting Monday (it would give the following Monday as the “start of week” instead of the previous one). For those cases, we subtract a week from the change and modulo it by 7 to make sure we don’t actually go back more than one week; this allows the calculation to go back up to 6 days, but no more.
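
This works because javascript’s % keeps the sign of its dividend (unlike python’s). A few illustrative values for a Monday weekstart (1):

(1 - 0 - 7) % 7  // -6: a Sunday (getDay 0) goes back six days to the previous Monday
(1 - 3 - 7) % 7  // -2: a Wednesday (getDay 3) goes back two days to Monday
(1 - 1 - 7) % 7  // -0: a Monday stays put (inclusive)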

Last day of current week:

var end = new Date(+start + 6 * 864e5);

Once you have the week start, it’s just a matter of adding 6 days.

Thursday (weekday 4) of this week:

var th = new Date(+start + (4 - start.getDay() + 7) % 7 * 864e5);

Similar to finding the week start, except we now want to travel forward in time from that start. The 4 represents the day of the week we are looking for in the current week (0 for Sunday, etc), and can be changed to any other number to find the corresponding day.

First Thursday of the year, with the year from Thursday of this week:

var jan1 = new Date(th.getFullYear(), 0, 1);
var yth = new Date(+jan1 + (4 - jan1.getDay() + 7) % 7 * 864e5);

Starting from January 1st, basically the same logic as finding the Thursday of this week.

Calendar week: milliseconds between the Thursdays we’ve found, divided by milliseconds per day to get days between Thursdays, then divided by 7 days to get number of weeks between Thursdays. This number is then 0-based, so add 1 to get the correct number.

var calWeek = (th - yth) / 864e5 / 7 + 1;
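
Putting it all together as one function -- a minimal sketch, where the function name is mine and Math.round is a guard I’ve added for dates that don’t sit exactly at local midnight (the arithmetic above assumes they do; rounding absorbs time-of-day and DST offsets). With a Monday weekstart, this matches ISO 8601 week numbering:

function calendarWeek(date, weekstart) {
    var day = 864e5;
    // First day of the date's week, up to 6 days back, inclusive.
    var start = new Date(+date + (weekstart - date.getDay() - 7) % 7 * day);
    // Thursday of that week.
    var th = new Date(+start + (4 - start.getDay() + 7) % 7 * day);
    // First Thursday of that Thursday's year.
    var jan1 = new Date(th.getFullYear(), 0, 1);
    var yth = new Date(+jan1 + (4 - jan1.getDay() + 7) % 7 * day);
    // Whole weeks between the two Thursdays, 1-based.
    return Math.round((th - yth) / day / 7) + 1;
}

calendarWeek(new Date(2013, 1, 5), 1);  // 6 -- February 5th, 2013 falls in ISO week 6
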
Experimenting with Haystack
Andrew Rowls · 2012-05-25

As a general principle, I put Whoosh in the same category as SQLite: great for getting started, wonderful for single-user or really small-scale apps, but not suitable for large-scale deployment.

This. The more I poke around, the more I'm convinced this is accurate.

I did some experimenting with Whoosh, Xapian, and Solr the other day, and have compiled the following simple stats. I kept running into a memory wall with Solr (see below), so there are benchmarks both from my initial setup with 512MB RAM and from an upgrade to 1GB RAM.

(If you're looking for a setup guide for any of these backends, sorry, this isn't that.)

Some background: I'm the developer for a Q&A site forked from OSQA. This runs django, and we are using django-haystack for search. Based on our indexing rules, we currently have more than 10K indexed questions, growing daily.

$ django-admin.py rebuild_index --noinput
Removing all documents from your index because you said so.
All documents removed.
Indexing 10642 questions.

We're also still ironing out the index schema, so I'm not sure what kind of impact an “optimized schema” will have on any of this.
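
For context, the index definition lives in a haystack SearchIndex. A minimal sketch using haystack 2.x's declarative API -- the import path and the deleted filter are illustrative, not our actual rules, while added_at is the real field the sorted benchmarks below order by:

from haystack import indexes
from forum.models import Question  # hypothetical import path

class QuestionIndex(indexes.SearchIndex, indexes.Indexable):
    text = indexes.CharField(document=True, use_template=True)
    added_at = indexes.DateTimeField(model_attr='added_at')

    def get_model(self):
        return Question

    def index_queryset(self, using=None):
        # The "indexing rules" live here: only index searchable questions.
        return self.get_model().objects.filter(deleted=False)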

My expectations were that these would follow a similar progression to SQLite, MySQL, and PostgreSQL (not to bash any of those, of course ;) ). Whoosh would be the worst performer -- everyone uses it because it's the easiest to set up, so it has to suffer somewhere, right? Next would be Xapian: better than whoosh for largish sites, but you don't see many people talking about it, so it must not be much better. Last, and best, would be Solr. In casual haystack-related browsing, you see the most people talking about Solr for big installations. And it's a standalone server app (runs on tomcat or jetty), so it has to be really performant -- right?

Before I tried benchmarking any searches, I did a quick test of how long it took each engine to build our index from scratch (on the original 512MB RAM). Times are the “real” time reported by the time command.

$ time django-admin.py rebuild_index --noinput

Whoosh: 3:03m
Xapian: 2:06m
Solr:   0:18m

No surprises there -- Whoosh is the worst, Xapian is a bit better, and Solr blows them both out of the water.

Next I ran four benchmarks -- single (common) term search, single term sorted search, multiple (common) term search, and multiple term sorted search. Note that these are full queryset evaluations (using list to pull all the results), sort of worst-case scenario type stuff. We don't actually do this in any real code ;) .

Each engine was set up with the bare minimum of work -- no tweaks, no optimizations, etc. The only difference was changing the length of indexed Question URLs (returned by get_absolute_url) to come in under Xapian's 245-character term limit.

Benchmark setup:

>>> from haystack.query import SearchQuerySet

Benchmarks (512MB RAM, 1GB RAM):

>>> timeit -r5 -n5 list(SearchQuerySet().auto_query('term'))
Whoosh: 899ms, 922ms
Xapian: 597ms, 577ms
Solr:   3.55s, 1.22s

>>> timeit -r5 -n5 list(SearchQuerySet().auto_query('term').order_by('added_at'))
Whoosh: 5.72s, 5.99s
Xapian: 613ms, 557ms
Solr:   1.17s, 1.22s

>>> timeit -r5 -n5 list(SearchQuerySet().auto_query('three term phrase'))
Whoosh: 899ms, 853ms
Xapian: 210ms, 200ms
Solr:   2.24s, 481ms

>>> timeit -r5 -n5 list(SearchQuerySet().auto_query('three term phrase').order_by('added_at'))
Whoosh: 6.32s, 6.1s
Xapian: 196ms, 199ms
Solr:   1.32s, 492ms

Whoosh performed pretty badly, which was not surprising; it was six to seven times slower for sorted searches, though I couldn't tell you why. Solr (which I previously saw as a sort of search analog to Redis) obviously benefits from more RAM, but still has pretty lousy times for short queries. But then Xapian comes along with really good times for all benchmarks. Who'd have thought!

Xapian's triumph was surprising, but at the same time welcome. Xapian is similar to Whoosh in terms of setup (that is, very easy), and we wanted to avoid adding another server app to our deployment setup, which Solr would have required. So we'll be switching to Xapian for our search, and should see a pretty good performance boost as a result.

HEY! Are you SURE you wanted to leave?
Andrew Rowls · 2012-05-11

It's a de facto standard these days: if you have a form that may take the user a long time to fill out, you ask them if they meant to leave before actually letting them go.

Needless to say, this is ugly, obtrusive, and all-around undesirable.

I had a need for such functionality, but didn't like the method that I was most familiar with (some trickery involving window.onunload or window.onbeforeunload). So I began searching to see what the latest "best practices" on the topic were.
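
That trickery, for reference: in the browsers of the day, returning a string from onbeforeunload made the browser throw up a confirmation prompt.

window.onbeforeunload = function () {
    return 'You have unsaved changes!';  // shown in (or replaced by) the browser's own prompt
};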

I should've known better.

The SO question that I came across, Best way to ask confirmation from user before leaving the page?, was answered by the resounding pleas of developers everywhere to not implement such a thing. Overall, I felt the responses were mostly specific to the asker's use case (preventing the user from leaving a registration form unintentionally), but then I read Oli's answer, the topic and gist of which was:

Never prevent the user, only help them.

So the question is not "How to prevent the user from accidentally leaving the page", but "How to help the user if they accidentally leave the page". And whole new thought processes opened up.

What are we helping them with? Since the use case is a form that takes a while to fill out -- or where changes are generally difficult to reproduce -- we primarily want to help them recover those changes.

Now, how do we do that?

For most forms, this is as simple as serializing the form's contents to a cookie or backend session periodically while the user is on the page. Don't worry about them leaving the page. If their data is saved, just load it back in when they come back -- which will likely be a few seconds later, if they did leave accidentally. (Incidentally, Oli goes on to describe this method in his answer).
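
A minimal sketch of that approach, using localStorage in place of the cookie or backend session -- the form id, storage key, and 5-second interval are all illustrative:

var form = document.querySelector('#big-form');  // hypothetical form
var KEY = 'draft:big-form';

// Serialize text-like fields every few seconds while the user is on the page.
// (Checkboxes and radios would need their checked state handled too.)
setInterval(function () {
    var data = {};
    Array.prototype.forEach.call(form.elements, function (el) {
        if (el.name) data[el.name] = el.value;
    });
    localStorage.setItem(KEY, JSON.stringify(data));
}, 5000);

// On load, quietly restore whatever was saved.
var saved = JSON.parse(localStorage.getItem(KEY) || '{}');
Array.prototype.forEach.call(form.elements, function (el) {
    if (el.name && saved[el.name] != null) el.value = saved[el.name];
});

// A successful submit shouldn't resurrect stale data later.
form.addEventListener('submit', function () {
    localStorage.removeItem(KEY);
});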

What if space might be an issue, or if saving the whole form is a bad idea (say, I don't know, a collaborative novel-writing app)? There are plenty of diff algorithms out there. Some are already implemented in javascript. Simply store the original locally and save out a diff periodically. Then make an honest attempt to merge the diff back into the current version when they come back from "accidentally leaving." Heck, at this point, you might as well do live-editing Google Docs style.

However you do it, the data doesn't need to be stored permanently: if the user leaves the page long enough for the cookie/session to expire, chances are they're not coming back to that content. And a "Wait, don't go!" dialog isn't going to prevent them from leaving, either.

Or, as Oli put it:

If they don't come back and their cookie/session expire, that's their fault. You can only lead a horse to water. It's not your job to force it to drink.

CSS Vendor Prefixes
Andrew Rowls · 2012-02-10

The twitterverse and style-o-sphere are abuzz with trepidation over the recent news that browser vendors will be adding support for -webkit- prefixes.

They're already past the "should we?" phase. They've moved on to "which ones?".

The current solution, apparently, is for everyone to start evangelizing "offending websites" -- ie, those websites that use -webkit- prefixes only, or that otherwise implement only some browsers' prefixed properties and not others'. Problem is, this really needs to go farther than just checking the prefixes -- even as far as testing mobile site versions on more browsers and devices. Easier said than done, sure, but as noted in the CSSWG minutes, this is the primary concern:

tantek: Sites have webkit-specific content, and serve backup content to everyone else. Specifically for mobile content.

Florian: Regardless of how we ended up here, if we [Opera] don't support webkit prefixes, we are locking ourselves out of parts of the mobile web.

Tab: ... the discussion is about webkit being a monoculture on Mobile ...

So the problem is that many developers use only -webkit- on their mobile sites, whether due to ignorance or an inability to test on mobile browsers other than iOS Safari. This results in websites that "work best in WebKit" and fall back to ugly -- or at worst, broken and unusable -- in other browsers. As PPK put it:

WebKit is the new IE6, where "WebKit" now means the iPhone.

I agree. In fact, I called out this problem two years ago. Gloat moment. I. Told. You. So.

So now we're going out, bugging developers to add more prefixes to their sites -- "putting the full brunt of namespace management on web developers". Treating the symptom. The cause is that the spec is broken.

What's important to note here is that the problem is with the prefixes, and not with web developers. If standards make web developers' lives much harder, web developers ignore standards. This was true back in 1998, it's still true today.

Prefixes do make developers' lives much harder. Should I use -o-text-shadow, or does Opera (or will Opera) even support text-shadow yet? Was it -moz-border-radius-topleft or -moz-top-left-border-radius? IE had some funky syntax for gradients -- right? Yes, there are tools that can help us with this, or even do it for us, but why should we depend on tools where simple brain-knowledge ought to suffice?

The best solution I've heard of so far is to drop vendor prefixes (existing ones will have continued support) and adopt cross-vendor -alpha- and -beta- prefixes.

Web developers who want to use an experimental feature just add -beta-coolfeature to their CSS, and they're done forever - unless the feature changes (but they run that risk nowadays, too).

One opposition to this idea was voiced by Stuart Langridge:

This is wrong. We already have a solution for dealing with situations like this:

.gradient-background {
    background-image: -khtml-gradient(linear, left top, left bottom, from(#fff), to(#000));
    background-image: -moz-linear-gradient(top, #fff, #000);
    background-image: -ms-linear-gradient(top, #fff, #000);
    background-image: -webkit-gradient(linear, left top, left bottom, color-stop(0%, #fff), color-stop(100%, #000));
    background-image: -webkit-linear-gradient(top, #fff, #000);
    background-image: -o-linear-gradient(top, #fff, #000);
    background-image: linear-gradient(top, #fff, #000);
    filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#fff', endColorstr='#000', GradientType=0);
}

Note that there are even two syntaxes for WebKit! If the browser doesn't understand a value, it's simply ignored. If it understands multiple values (usually just two - prefixed and unprefixed), the last understood value in the list "wins". And of course some IEs don't understand any syntax, and require a separate rule that other browsers simply ignore. This is how prefixing works.

In the case of browsers implementing different syntaxes for an -alpha- feature, we would do this:

.coolness {
    -alpha-cool: 1.0; /* Opera */
    -alpha-cool: 100%; /* webkit, gecko, IE */
    -alpha-cool: 255; /* khtml (say what, now?) */
    -beta-cool: 100%;
    cool: 100%;
}

Yes, there are still multiple values for the -alpha- prefixed rule, but the number of values is limited by the number of ways to express a value or the number of vendors that implement the feature, whichever is fewer. The chances of the first vendor's syntax (let's say WebKit implemented the -alpha- first) working in other browsers are increased from "zero" (-webkit- vs -moz-) to "pretty good" (both use -alpha-, and will probably use the same syntax for most rules). When the -beta- comes out, the feature has started into the standardization process, and should have even fewer valid values. When the unprefixed version becomes the standard, there is one version that "just works", overriding the -alpha- and -beta- implementations just as it does now.

If you're worried about syntax changes in the -alpha- stage, don't be. By definition, alpha is not fit for public consumption.

Felipe G's @-vendor-unlock proposal is interesting, but still relies too much on engine namespacing. This is the same pain we went through with going from UA-sniffing to feature detection -- we should care more about what the engine can do, rather than which specific engine it is.

Better yet, we should trust the engine to know what it can do, rather than what we think we know it can do.

Advanced listcomps
Andrew Rowls · 2011-11-10

A week ago, @alex_gaynor tweeted a quick listcomp for flattening a two-dimensional matrix (a list of lists); it's reproduced below.

At the time, I thought it was cool, but I didn’t really “get” the “how”. But now I do.

Say we have the following matrix:

matrix = [
    [1, 2, 3],
    [4, 5, 6],
    [7, 8, 9],
]

The flattening looks like this:

>>> [x for y in matrix for x in y]
[1, 2, 3, 4, 5, 6, 7, 8, 9]

PEP 202 states:

  • The form [… for x… for y…] nests, with the last index varying fastest, just like nested for loops.

Most list comps take the form a = [b(c) for c in d if e]. If each level of logic nests, this is equivalent to:

a = []
for c in d:
    if e:
        a.append(b(c))

So, following this logic, multiple for ... clauses in the matrix-flattening are simply nested loops:

matrix = [
    [1, 2, 3],
    [4, 5, 6],
    [7, 8, 9],
]

flattened = [x for y in matrix for x in y]

flattened = []
for y in matrix:
    for x in y:
        flattened.append(x)