America’s core ideal: the echo chamber

America is becoming increasingly polarized.

Right-wing discourse is full of endless hand-wringing about how the left is made up of “special snowflakes” who need “safe spaces” (or, in harsher terms, “echo chambers”) to function, and how left-wing views won’t stand up to the scrutiny of “the real world”. Left-wing commentators counter by pointing out the right’s tendency to ban the official use of terminology that runs counter to its viewpoints, and to exclude minorities from its spaces – correctly noting that this is the right instituting its own “safe spaces” and “echo chambers”. As America divides, it’s becoming increasingly apparent that even our core values are drifting apart.

I’d like to posit that this is because we don’t really have the core values that our founding mythology says we do. Instead, we just have one core ideal, the one that our country was really founded on: the very same echo chambers that we endlessly argue about. We want supremacy of the views that we hold, without being challenged by competing views, as a deep-seated cultural ideal.

The case for both simplifying LMP1-H, and making it more complex

I feel like there’s room for a follow-up to my previous piece on the Le Mans Prototype 1-Hybrid subclass, now that there’s been more news – including Porsche’s announcement that it’s terminating its LMP1 program after this season.

Those who see hybrid technology as bad for LMP1 see this as vindication of their ideas, now that the best reasonable case for 2018 is two or three Toyotas in the top class at Le Mans. I don’t think they’re right… but they do bring up some interesting points.

My thoughts on the future of LMP1-H, after the 2017 24 Hours of Le Mans

It’s been a day since the 24 Hours of Le Mans, and I’ve finally put my thoughts together on it. It was something else – unlike anything I’ve ever seen before.

Before the 24 Hours of Le Mans, predictions were that Toyota would walk away with the race on both pace and reliability. Predictions were that the all-new and huge LMP2 field would have failures en masse, having shown mediocre reliability in the lead-up to Le Mans. And, fears were that the GTE-Pro field would have poor balance of performance, allowing someone to run away with it.

Flashing linear flash cards – quick notes

I recently bought an HP OmniBook 430 that needed a system ROM card – and the ROM image has to go on a linear flash card.

Creating such a card, however, was a bit of an ordeal, so I’ll quickly document what exactly I did.

Ultimately, I used a ThinkPad 365XD, but the card I used would’ve been usable on almost any laptop with PCMCIA. (I tried a Fujitsu Lifebook P1620 as well, but had trouble finding a sufficiently old Linux that worked with its hardware… although I now think I could have used something more modern, knowing what I was doing wrong.)

The card was a Viking Technology 24 MiB card, which was an Intel Value Series 200 card. It’s a 3.3/5 volt read, 3.3/5 volt program card, using Intel StrataFlash – what we’d now refer to as MLC flash. Complicating matters, this card did not have attribute memory, and was completely blank, meaning that many tools couldn’t automatically figure out what it was. (I’m still not sure if it’ll actually work in the target device, partially because of being Value Series 200 – if not, I’ve ordered a Series 2+ card, which is likely a better choice anyway – but my goal here was to successfully program it, and that’s definitely been done.)

For software, I used Damn Small Linux 4.4.10, plus the DSL MTD kernel modules from here.

After saving mtd-modules.tgz in /home/dsl/, I issued the following commands in a root terminal:

cd /
tar -xzf /home/dsl/mtd-modules.tgz   # unpack the MTD kernel modules over the root filesystem
modprobe pcmciamtd buswidth=2 force_size=24 vpp=50 setvpp=1   # 16-bit bus, 24 MiB card, 5.0 V programming voltage (vpp is in tenths of a volt), applied on writes
modprobe mtdchar   # character-device interface to MTD
rm /etc/pcmcia/config
cp /KNOPPIX/etc/pcmcia/config /etc/pcmcia/config   # restore a clean copy of the card database
nano /etc/pcmcia/config

(OK, in reality, I issued a bunch of other commands, but those are the ones that were actually important, I think.)

At this point, I added two lines near the beginning of the file:

device "pcmciamtd"
  class "memory" module "pcmciamtd"

Then, later in the file, I changed the binding of Anonymous Memory from memory_cs to pcmciamtd.
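For reference, the edited entry ends up looking something like this – the exact phrasing of the card line may differ slightly in your copy of the config, but the bind line is the part that matters:

card "Anonymous Memory"
  anonymous
  bind "pcmciamtd"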

After saving that, killing cardmgr, and relaunching it, I found that /proc/mtd listed the device, and dmesg showed that it was configured (and that CFI had successfully figured out what kind of flash chip it was working with)… but there was no device node. Apparently MTD didn’t make nodes automatically back then, so a quick mknod /dev/mtd0 c 90 0 took care of it. At that point, I could simply cat the firmware image to /dev/mtd0, and it flashed successfully – which could be verified by ejecting the card, reinserting it, recreating the device node, and running cat on that device node.
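To make that last step concrete (rom-image.bin here is just a stand-in for whatever image you’re flashing):

mknod /dev/mtd0 c 90 0           # MTD character devices use major number 90; mtd0 is minor 0
cat rom-image.bin > /dev/mtd0    # write the image to the card
# to verify: eject and reinsert the card, re-run the mknod, then read it back and compare
cat /dev/mtd0 > readback.bin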

Hopefully this is now available as a reference for anyone who needs it.

Dieselgate and CO2 emissions

In case you’ve been living under a rock, Volkswagen was caught cheating on emissions testing with their 2009-2016 diesels. Recently, a proposed settlement including a buyback program (as well as, potentially, a fix) has been announced for the 2009-2015 2.0 liter vehicles. One concern, however, is the CO2 emissions impact of this buyback – both in terms of manufacturing emissions, and in terms of fuel consumption.


MPG is bullshit

That’s right, MPG is bullshit (and along with it, MPGe).

I don’t mean that the emphasis on improving fuel efficiency in personal transportation is bullshit; to the contrary, it’s one of the most important things we can do today. Similarly, I see the value in representing the efficiency of non-gasoline vehicles in a way that translates to gasoline units of energy – it helps put things into context for an efficiency-minded buyer.

The problem is that MPG, as a metric for measuring vehicle efficiency, is terrible at representing that efficiency in an intuitive way. As a result, it has discouraged improvements in efficiency in the vehicles that need them the most, and directed efforts toward vehicles that need them less. To illustrate this, I’ll create a scenario.

You have two older vehicles that you drive about equally: one that gets 15 MPG (probably a full-size pickup or a large SUV), and one that gets 30 MPG (probably a compact car). You’ve got the funds to replace one vehicle, and you want to get something more fuel efficient without losing capability – so you’re looking at either another full-size pickup or SUV, or another compact car. You look at the vehicles that are available, and see that you can get a full-size truck or large SUV that gets about 20 MPG. Alternatively, you can get a car that gets 50 MPG nowadays, and it’s even a fair bit bigger than a compact. Which should you buy to reduce your fuel consumption the most?

Intuitively, you’d get the car – it gets 20 MPG more than your existing car, a 67% improvement. The new truck only gets 5 MPG more than your old truck, a 33% improvement.

And this gets into why MPG is bullshit: MPG measures how far you can go on a fixed amount of fuel, but you’re not driving for a fixed number of gallons – you’re driving a fixed number of miles. In Europe, the standard (at least outside of the UK, anyway) is to report fuel economy in liters per 100 kilometers. That answers the question of how much fuel it takes to go a fixed distance, instead of how far you can go on a fixed amount of fuel. Metric system issues aside, I’ll illustrate how this is a superior way of representing efficiency, using gallons per 100 miles – the familiar units in the US.

Under the gallons per 100 miles system, your truck is now rated at 6.67 gal/100 mi, and your car at 3.33 gal/100 mi – the conversion is simply 100 divided by the MPG. The new truck and new car are rated at 5 gal/100 mi and 2 gal/100 mi respectively. So the new truck uses 1.67 gallons less fuel over 100 miles, whereas the new car only uses 1.33 gallons less over the same distance. Upgrading the truck reduces your fuel consumption more than upgrading the car, even though the intuitive ways of looking at MPG (numeric or even percentage improvements) make the car look like the better option.
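If you want to check the arithmetic yourself, it’s a one-liner at a shell prompt (note that bc truncates rather than rounds, so the last digit comes out slightly low):

# gal/100 mi = 100 / MPG; fuel saved per 100 miles by each upgrade:
echo "scale=4; (100/15) - (100/20)" | bc   # truck upgrade: ~1.67 gallons saved
echo "scale=4; (100/30) - (100/50)" | bc   # car upgrade:   ~1.33 gallons saved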

Ultimately, because MPG as a measurement is relatively insensitive to even large improvements in efficiency in inefficient vehicles, while magnifying minor improvements in efficiency in already efficient vehicles, it’s arguably hurt the American automotive marketplace. The American automotive market is one that buys plenty of large, inefficient vehicles for various reasons, and in those vehicles, if a consumer sees a “mere” 1 or 2 MPG difference between two models, they may be less inclined to take a more efficient option, even though it would save a significant amount of fuel. Conversely, consumers may prioritize replacing already efficient vehicles with vehicles that are only slightly more efficient, because of a large difference in MPG.

It’s worth noting that on fueleconomy.gov, the US government’s website for information on vehicle fuel economy, they list gal/100 mi and kWh/100 mi figures in addition to the MPG (or, for electric vehicles, MPGe) figures – in smaller print, leading with MPG or MPGe – and allow users of the site to have figures displayed in either gal/100 mi or l/100 km. I applaud them for that much, but I’d personally like to see MPG abolished altogether in favor of gal/100 mi (l/100 km is just asking too much, especially because fuel’s sold in gallons and distance is measured in miles in this country) as the primary way of reporting efficiency for liquid-fueled vehicles, including on the Monroney sticker that’s on all new cars.

Windows 10 DPI scaling and window positioning issues on laptops

If you’re using a Windows 10 laptop at anything other than the default scaling factor for your display, you may encounter an issue where closing the lid causes your window positions and sizes to be forgotten. I discovered this on my MacBook Pro Retina, which I run at 100% scaling (the default is 200%). This may also apply on Windows 8.1, but I haven’t tried it on this hardware.

In Windows 8.1, Microsoft introduced a new model of handling DPI scaling that allows different monitors to have different scaling factors. This is useful when you’re running multiple displays of vastly different density, as it makes applications appear roughly the same size on all monitors.

However, at least in Windows 10, there’s a problem with this. Windows detects displays when they’re attached, determines a proper default DPI, and applies it; once that’s done, it applies your preferred scaling factor to the display. The problem is that closing the lid on a laptop effectively detaches the display, and opening it back up causes redetection. There’s no problem if you’re happy with the default scaling factor, but if you’re running a non-default one, this can cause huge problems: apps can get stuck at the old scaling factor, and windows get rearranged and resized.

A workaround (if you don’t need different DPI for each monitor) is to disable Windows 8.1-style DPI scaling. On 8.1, this could be done by checking the “Let me choose one scaling level for all my displays” checkbox. That checkbox is no longer available in Windows 10, but the registry value it changed still is, at HKCU\Control Panel\Desktop\Win8DpiScaling. Change it from 0 to 1, reboot, and all displays should have the same DPI – and closing your lid won’t change your window layout.
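If you’d rather not click through regedit, something like this from a command prompt should be equivalent (assuming the value is the DWORD described above):

reg add "HKCU\Control Panel\Desktop" /v Win8DpiScaling /t REG_DWORD /d 1 /f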

Obviously, there’s an underlying bug in display detection that Microsoft needs to fix (as the feature I’ve disabled is actually a useful one). However, for my case, where I’m happy with all displays running at the same scaling factor, the old way of multi-monitor scaling (as used from Windows 98 – the first version with official multi-monitor support – through 8.0) works perfectly fine.