Linux on Sager P8950 / Clevo P950HP6
by TrueJournals on Jun.04, 2017, under hardware, technology
Writing this post in the hopes that someone else wants to run Linux on this laptop, and doesn’t have to go through the same struggle I’ve gone through.
This post is accurate as of 2017-06-04 — I can’t guarantee that I’ll update it in the future. If more than 6 months has passed, the information here is likely outdated. Where possible, I’ll try to list sources that should contain up-to-date information.
I’ll try to organize this chronologically with what you’ll need to do to get everything up and running. Note that this sometimes involves going back and changing things.
All of this is written for Fedora 26, since that’s what I’m using at the moment. It should basically apply to other distributions, but you may have to tweak some things. That’s left as an exercise to the reader 🙂
First boot stability: dealing with Optimus
This laptop contains nvidia Optimus technology, which lets you optionally run certain applications on the nvidia graphics card and power it down when it's not needed, to save power (it's actually unclear to me whether this is "really" Optimus or actually "MSHYBRID graphics", but they seem interchangeable for our purposes). Unfortunately, Optimus seems to introduce a lot of weird ACPI interactions, and the OS compatibility strings Linux advertises by default can cause instability here.
So, let's fix that (otherwise, you'll get random system crashes). You'll need to add acpi_osi='!Windows 2015' to your kernel command line. Edit /etc/default/grub and add that somewhere inside GRUB_CMDLINE_LINUX. It'll probably look something like this:
GRUB_CMDLINE_LINUX="rd.lvm.lv=fedora/root rd.lvm.lv=fedora/swap rhgb quiet acpi_osi='!Windows 2015'"
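For the change to take effect, regenerate the GRUB configuration and reboot. On a BIOS install of Fedora that's usually the following (on an EFI install the generated config lives at /boot/efi/EFI/fedora/grub.cfg instead, so adjust the path accordingly):

sudo grub2-mkconfig -o /boot/grub2/grub.cfg

After rebooting, cat /proc/cmdline will confirm the option actually made it onto the kernel command line.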
This gets us to a stable system which can turn off the nvidia card (GPU switching is handled by PRIME). Unfortunately, we need a newer kernel if you want nouveau to actually work: support for the GeForce GTX 1000 series lands in 4.12, while Fedora at the moment ships 4.11.3. I've built 4.12 for myself (it's easy; see Fedora's instructions, or use Fedora Vanilla Kernels).
Getting Audio working
I’m working on sending this information upstream to get things working even better here — I’ll try to add a link to the bug report / more information when it’s available.
You may now notice that your audio doesn't work. The steps below assume a kernel of at least 4.11, which is when Realtek ALC1220 support landed.
The default ALSA model doesn’t seem to work with this laptop, so you’ll need to change that. I’ve used two models that work: no-primary-hp and dual-codecs. I’d recommend no-primary-hp, since it seems to work slightly better.
To change your model, create /etc/modprobe.d/alsa.conf with the following:
options snd-hda-intel model=no-primary-hp power_save=1
(Note that I’ve also enabled power saving here, which should also help with battery life)
Reboot and you'll notice… sound doesn't quite work yet. There's one more quirk here: by default, the laptop boots with the "Headphone" audio control muted, and unfortunately that's the control that actually sets the volume of the speakers! So, on each boot you'll need to unmute the Headphone control and turn it up (I've so far been too lazy to investigate how to do this automatically). You can run this command to set everything up properly:
amixer -c 0 sset Headphone 100% unmute
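If you do want to automate it, a simple systemd oneshot unit should do the trick. This is just a sketch (I haven't set it up myself, and the unit name is my own invention); save it as /etc/systemd/system/unmute-headphone.service:

[Unit]
Description=Unmute the Headphone control that drives the speakers
After=sound.target

[Service]
Type=oneshot
ExecStart=/usr/bin/amixer -c 0 sset Headphone 100% unmute

[Install]
WantedBy=multi-user.target

Then enable it with sudo systemctl enable unmute-headphone.service and the same amixer command should run on every boot.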
Nvidia Performance: Switch to Proprietary Drivers
NOTE: External monitors will not work in this configuration! If you need external monitors, you’ll want to skip this step, but you’ll lose some performance on the nvidia GPU. It’s unfortunate that this trade-off exists at the moment 🙁
Next, switch to the proprietary nvidia driver to get better battery life, since I didn’t have good luck with the open source drivers. To do this, you’ll want to install Bumblebee. Since I’m using Fedora, I just followed the instructions on the Fedora wiki:
sudo dnf -y --nogpgcheck install http://install.linux.ncsu.edu/pub/yum/itecs/public/bumblebee/fedora$(rpm -E %fedora)/noarch/bumblebee-release-1.2-1.noarch.rpm
sudo dnf -y --nogpgcheck install http://install.linux.ncsu.edu/pub/yum/itecs/public/bumblebee-nonfree/fedora$(rpm -E %fedora)/noarch/bumblebee-nonfree-release-1.2-1.noarch.rpm
sudo dnf install bumblebee-nvidia bbswitch-dkms VirtualGL.x86_64 VirtualGL.i686 primus.x86_64 primus.i686 kernel-devel
Reboot and bumblebee should be working. You can use “primusrun” to run things on the nvidia card.
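To sanity-check that the nvidia card is actually being used, compare the reported renderer with and without primusrun (glxinfo and glxgears live in the glx-utils package on Fedora, if you don't have them already):

glxinfo | grep "OpenGL renderer"             # should report the Intel GPU
primusrun glxinfo | grep "OpenGL renderer"   # should report the nvidia GPU

primusrun glxgears is another quick way to confirm the whole stack works.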
Fancy Keyboard/Fan Control
The last step is being able to configure the keyboard backlight, and fan control (if you want). For that, you'll need the clevo-xsm-wmi kernel module. For simplicity, I have a fork which adds recognition of this laptop. Pull that down and compile the kernel module:
git clone https://github.com/TrueJournals/clevo-xsm-wmi.git
cd clevo-xsm-wmi
./install
This will create a DKMS module for the driver so it'll be automatically rebuilt and reinstalled with any kernel updates. With this, the keyboard backlight keys should work as expected (so you can turn the backlight off if you want, or change the brightness). There's also a little GUI utility if you want to set custom backlight settings, but I'm not going to get into that in this blog post (I just want the shortcuts to work so I can turn the backlight off or lower it 🙂 )
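You can double-check that the DKMS registration worked and that the module is actually loaded:

dkms status                 # should list clevo-xsm-wmi against your running kernel
lsmod | grep clevo_xsm_wmi  # should show the module once it's loaded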
Outstanding Issues
That should basically do it — you should have a laptop with decent battery life, working sound and good graphics (use primusrun to use the nvidia card, or DRI_PRIME=1/GNOME “dedicated” option) under Linux!
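If you stuck with the open source drivers instead of Bumblebee, the equivalent sanity checks use PRIME directly:

xrandr --listproviders                         # should show both the Intel and nvidia providers
DRI_PRIME=1 glxinfo | grep "OpenGL renderer"   # should report the nouveau-driven GPU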
Of course, there are still some outstanding issues I’ve been fighting…
- primusrun seems to cause a crash occasionally. Recently, I somehow put myself in a state where I can't open Steam without the whole OS crashing
- primusrun has some trouble with certain Steam games (I'm having trouble with Portal 2 specifically, but haven't done much testing)
- You'll need to modify each game in Steam for primusrun by setting its launch options to "primusrun %command%"
- You can experiment with running steam itself under primusrun, but I’ve had trouble with this
- For the open source drivers, launch Steam on the dedicated graphics card (or maybe set the launch options to "DRI_PRIME=1 %command%"? I haven't played around with that)
- External monitors don’t work with bumblebee
- It’s my understanding that external monitors can work with bumblebee, but that basically requires keeping the nvidia GPU on all the time. Personally, I’m sticking with the open source drivers, at least for now. I know Fedora is working on improving the whole situation here, so hopefully things will get better in the future.
Links for up-to-date information
- ArchWiki, NVIDIA Optimus
- Fedora wiki, bumblebee
- Information about nouveau/PRIME
- clevo-xsm-wmi upstream (Linked above is my fork with support for this model and fan control. Working on getting model support upstream, but fan control was added by someone else)
If you’re writing C++, write C++!
by TrueJournals on May.06, 2014, under programming
Today, I came across some C++ that looked something like this:
std::string output = someFunctionCall();
if(strlen(output.c_str()) > OUTPUT_LIMIT) {
    // Trim output
}
Now… I get it. C++ is not an easy language. C++ can be especially difficult if you’re coming from C and you’re not used to object-oriented programming, or if you already know how to do what you want in C. But please, use the right tool for the job.
The code I came across wasn’t quite this simple (the string did not come from directly above the if statement, but the if statement is exactly the same, differing only in variable names). I’m guessing that, at some point, this code was written in C before the project moved to C++. However — someone, somewhere, at some point came across this if statement when refactoring, and decided that the right thing to do was to keep the “strlen” call and keep using C-style strings.
That means this person didn’t think “I wonder how I get the length of a string in C++”. The person who refactored this line of code did not care what the intention was — they just cared to replicate EXACTLY what it was doing.
To make matters worse, someone decided that “output” should use NULL characters as a field delimiter of sorts. I’m guessing that, in the original C code, these delimiters were properly removed. However, this did not happen in the C++ refactor. There was a NULL character about 20 characters into output, which meant that the call to strlen would always return 20. It’s nice that C++ strings allow NULL characters — it’s bad when you then try to treat them as C strings.
Now, I won't comment on the validity of using NULL characters as a field delimiter (it's a bad idea… please, just don't do it), but whoever decided that this line of code should not behave in the same way, but instead execute the same calculation, was just wrong.
Refactoring code is hard (especially if you didn’t write it yourself). Programming is hard. But, please, if you’re modifying someone else’s code, try to understand the intention of the code more than just the literal function calls. The author of the original code clearly meant to execute the trimming code if the length of “output” was greater than the limit, and output.size() is the best way to check for that.
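To make the failure mode concrete, here's a tiny standalone example (not the original code, obviously) showing how an embedded NULL character fools strlen but not std::string::size():

#include <cstring>
#include <iostream>
#include <string>

int main() {
    // Simulate "output" with a NULL character used as a field delimiter.
    std::string output("first field\0second field, much longer than the first", 52);

    std::cout << std::strlen(output.c_str()) << "\n";  // 11 -- strlen stops at the embedded NULL
    std::cout << output.size() << "\n";                // 52 -- the real length
    return 0;
}

A comparison against OUTPUT_LIMIT behaves completely differently depending on which of those two numbers you feed it.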
Let the Language Work for You – Part 1: C++ Destructors
by TrueJournals on Oct.10, 2013, under programming
I recently came across a programming design pattern I hadn’t thought of, and that inspired me to write at least one blog post. So, I introduce this post which may or may not start a series of posts, entitled “Letting the Language Work for You”.
The idea behind this series is this: programming languages have become very advanced, possibly more than most people realize. There are many things you may be doing now in your programs that are simply extra work that you don’t really need to do. Instead, you should let the language do the work for you. These posts will (hopefully) help you take full advantage of your programming language to avoid extra work, and hopefully keep out some bugs.
For part 1, let’s look at destructors in C++. Before we get started, I’ll note that this post is very much inspired by Boost’s ScopedLock pattern. If you know where this is headed just by reading that, I’ll understand if you decide you don’t need to read what I have to say. Otherwise, if you use an object oriented language, I’d encourage you to keep going.
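As a teaser for where this is headed, here's the shape of the pattern, boiled down to a minimal sketch (not Boost's actual implementation): a tiny class whose constructor acquires a resource and whose destructor releases it, so the release happens automatically when the object goes out of scope, on every return path and even if an exception is thrown.

#include <mutex>

std::mutex m;
int shared_counter = 0;

// A stripped-down ScopedLock: lock in the constructor, unlock in the destructor.
class ScopedLock {
public:
    explicit ScopedLock(std::mutex& mtx) : mtx_(mtx) { mtx_.lock(); }
    ~ScopedLock() { mtx_.unlock(); }
private:
    std::mutex& mtx_;
};

void increment() {
    ScopedLock lock(m);   // the mutex is acquired here...
    ++shared_counter;
}                         // ...and released here, no matter how we leave the function

(C++11 ships this exact idea as std::lock_guard, but writing it out makes the destructor's role obvious.)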
Inside the Acer Aspire M5-583P
by TrueJournals on Oct.05, 2013, under hardware, life
I recently got a new laptop. After a lot of searching, I decided on the Acer Aspire M5-583P. Overall, it’s a really awesome laptop, and I just couldn’t find anything that came close to the specs for the price ($650 when I got mine).
There was one thing, however, that I was a bit disappointed in: the lack of an SSD cache. I was curious how easy it would be to replace the HDD, but I didn’t want to open the laptop and void my warranty right away, so I started looking online for pictures of the internals. Unfortunately, there seems to be nothing available.
So, I finally broke down and opened the laptop myself. The laptop is super simple to open: just some screws that are easily visible on the back. After that, you just have to pry the black bottom piece off. That part is a bit challenging, since it’s protected by little plastic clips, but you can just gently tug at it a bit and it pops right off.
So, I present here… some pictures! (continue reading…)
My First Open Source Library: libptp++
by TrueJournals on Mar.03, 2013, under programming
I’ve been working on it for a while, but since I’ve made the code public, it’s time for an official announcement: libptp++ is ready!
When working on my project for senior design, part of the challenge was being able to communicate with a Canon camera and send commands to CHDK. There's an excellent Python library that does the job, but running on the Raspberry Pi, it was really slow. After doing some more benchmarking and testing, I determined that Python was just going to be slow on the Raspberry Pi, and it was time to move on to a different language.
I decided to pick C++ for a few reasons:
- It’s just about as fast as you can get. I’m sure there are people who will say that it’s not quite as fast as C, but it’s going to be pretty close. Having a compiled language is much faster than any interpreted language.
- It’s object-oriented. I felt that pyptp2 had a great API, and was already accustomed to using it. With an object-oriented language, I could simply implement this existing API. I also feel that objects provide great assistance to the programmer.
- I wanted to. I've been wanting to get into C++ and see what the syntax is like and what it's capable of, but haven't had a good chance to. This project provided me that opportunity.
So, I started work on building a PTP library in C++. I’ll note here that there IS a PTP library in C. However, I had a lot of trouble figuring out how to use this library (it’s heavily tied in to the ptpcam program it comes with), and all CHDK extensions I could find were poorly documented, and, again, tied heavily to some other program.
Having said all that, I'm very proud of where libptp++ is. It's as functionally complete as I need for my senior design project, but I'm still working on expanding it. The CHDKCamera class it provides is mostly complete, but the PTPCamera class (for cameras that don't or can't run CHDK) is unimplemented. Going forward, I'll need to look into what functionality that class will need to provide.
I’ll also note that I did my best to document everything using Doxygen strings. One of my complaints about libptp2 is that it’s poorly documented, so I don’t want to run into the same trap. Run libptp++ through Doxygen and you’ll get some wonderful documentation. There’s even an example included in there!
I’m releasing this code under the LGPLv3. Basically, this means (to my understanding) that you can use it however you want, as long as you release the source code. Additionally, the LGPL allows this library to be included in commercial products. Again, though, the source code must be provided.
I hope someone else can get some use out of this library! It’s certainly not perfect, but its current state is a good start. I encourage anyone that finds a bug to file an issue on GitHub, or fork the project, fix the bug and send in a pull request!
Google Drive Form Report Generation
by TrueJournals on Jan.22, 2013, under programming
Printing form responses in Google Drive is an awful experience.
That’s a strong statement, but it’s the truth. Forms in Google Drive are used for so much more than simply numerical responses, with some being used for surveys, sign ups, and other data that just doesn’t fit properly in a spreadsheet. Yet, that’s the only format Google presents this data in. While spreadsheets are fantastic for numerical data analysis, they do little for analysis of open-ended survey responses.
And this statement even assumes you're attempting to read the data off a computer screen. Want to print out the data to keep a hard copy (or just to reduce eye strain from your computer's backlight)? Forget it! You can attempt to print the responses in landscape, but you'll end up with an unorganized mess of paper that doesn't really solve the original problem.
So, how can we get the data from the Form into a format that’s pleasant to read, and easy to print? We use a script!
The Google Apps scripting engine is amazing. I’ve previously used it to create rudimentary conditional formatting in Google Docs. Now, I wanted to play around with it more and generate single page(-ish) reports of individual responses to a Google Drive Form. I say “-ish”, because the script doesn’t guarantee a single page. The end length is simply based on the response length and number of questions.
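To give a flavor of how little code the core idea takes, here's a stripped-down sketch (this is not the published script, and it assumes the simplest possible layout: questions in the header row, one response per row). It reads one response row and writes the question/answer pairs into a fresh Google Doc:

// Sketch: turn one row of the response sheet into a Google Doc "report".
function generateReport(rowIndex) {
  var sheet = SpreadsheetApp.getActiveSheet();
  var headers = sheet.getRange(1, 1, 1, sheet.getLastColumn()).getValues()[0];
  var answers = sheet.getRange(rowIndex, 1, 1, sheet.getLastColumn()).getValues()[0];

  var doc = DocumentApp.create('Form Report - Row ' + rowIndex);
  var body = doc.getBody();
  for (var i = 0; i < headers.length; i++) {
    body.appendParagraph(String(headers[i])).setHeading(DocumentApp.ParagraphHeading.HEADING3);
    body.appendParagraph(String(answers[i]));
  }
  doc.saveAndClose();
}

The real script layers the folder handling, PDF export and configuration options on top of that basic loop.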
So, how do you use this script? Just find it in the script gallery! Searching for "Report Generator" should find the script and allow you to install it easily. Once installed, a new "Report Generation" menu will pop up in your spreadsheet. Simply select the row you want to generate a report for and choose "Generate Single Report" (or generate all at once!). In a few moments, you'll have a new folder in your document list, and a doc and/or PDF of the report.
The script has a few configurable options, which are for the most part self-explanatory. I’m working on getting some documentation up at the script’s help page, but that may take me some time. I also plan on adding more features to the script: the ability to change the folder name, change the resulting document names, change the formatting of the output, etc. I’ve never published a script to the gallery before, but I’m hoping I’ll just be able to update the script when I have new features ready.
So, if you’re looking for a good solution to printing out responses to Google Drive Forms, check out “Report Generator” in the script gallery!
“I’m always worried I’m going to blow something up”
by TrueJournals on Aug.07, 2012, under technology, thoughts
For the past two summers, I’ve worked at a level 1 tech support job at my college. For those unfamiliar with this jargon, this means that I’m the person who sits at the help desk, and attempts to help out anyone who comes into the desk, or calls the phone, or e-mails us. Technically speaking, it’s a pretty simple job, as we do some basic troubleshooting, then bump any issues up to level 2 if we determine we need to fix something in person.
Today, I was on the phone with someone, and they inadvertently reminded me of the difference between “technical” people and “non-technical” people.
The phone call was simple: Gmail brought up a page reporting that the person’s browser was out of date because they were using Internet Explorer. I suggested that he install Chrome, as it is a faster and more secure web browser, and now our “officially supported” browser, and proceeded to walk him through the installation.
And when I say walk him through the installation, I mean walk him through every step of the installation. Every button he should click, when he should click them, etc. During this process, he explained to me why he was asking for my help at every step of the process:
I’m always worried I’m going to blow something up.
So, this is the difference between “tech” people and “non-tech” people: the non-tech people think that if they click a wrong button, their computer will either explode, or stop working permanently. The tech people have opened their computer to find that no explosives are contained within.
One of the things I enjoy doing is breaking my operating system. I use Ubuntu Linux as my main operating system, and regularly do stupid things that might be considered destructive. But that’s OK — I know that I can always get my data, and I enjoy going through the processes of figuring out what went wrong, and how to fix it.
In a word, I’m fearless when fixing computers: I know that if I click the wrong button, there’s always a way to go back or undo whatever I just did. Sometimes, this is as simple as clicking a “back” or “undo” button. Sometimes it’s a bit more involved, but that’s OK! Because it means I’m going to learn what doesn’t work, and possibly learn something more about my operating system while I attempt to fix what’s broken.
If there's one piece of advice I can give to non-tech people, it's this: don't be afraid to click buttons. Seriously. Most buttons you press are not going to permanently destroy your software. If you're worried about losing data, make a backup of whatever file you're working on, then click buttons to your heart's content!
Google’s an ISP — Now what?
by TrueJournals on Jul.27, 2012, under thoughts
As most people have probably heard by now, Google is a certified ISP. Google will be providing Kansas City residents with a 1 Gbps fiber-to-the-home connection for $70/month. Add TV to that service for an extra $50/month, and you get the most awesome Internet and TV bundle for $120/month. Can’t afford that? Google’s even offering a 5 Mbps connection for free (after a $300 installation fee). All of this makes Google possibly the best ISP in the nation.
But, as some have pointed out, Google isn't doing this with a huge profit margin. It's not that Google is losing money on the deal, per se, but it doesn't seem that they're making enough profit to survive on this business alone. Luckily, this isn't a problem for Google, as they have plenty of other revenue streams. But it does lead to a question about Google's true motivation for this move.
So, I offer here my theory: Google is becoming an ISP to put pressure on existing ISPs to upgrade their infrastructure to be able to provide cheaper and faster access. Currently, there isn’t an ISP I’m aware of that can come close to the deal Google’s making in Kansas City for residential customers. If Kansas Citians realize this, they’ll flock to Google, leaving bigger ISPs (Comcast? AT&T?) in the dust.
Of course, as businesses, Comcast and AT&T should do whatever they can to keep their customers. At first, they might just offer to lower prices for customers that threaten to leave, but that really can’t maintain them long-term. They will be forced to do something to remain competitive. Especially if Google starts to move into other areas.
Remaining competitive will mean upgrading networks. That’s just a simple fact. The current residential Internet infrastructure just can’t compete with what Google’s offering. And as long as things remain as they are, the major players have no motivation to put money into upgrading their networks. I believe we’re seeing essentially the same thing with cell phone providers in the US: there isn’t enough competition to force true network infrastructure upgrades.
So, here are my predictions for the next 10-20 years:
- Major ISPs in Kansas City will slowly start to upgrade their infrastructure and offer lower prices in order to compete with Google’s offering.
- Google will start to move into/threaten to move into other areas.
- Large ISPs will (and this might be very wishful thinking) learn from the Kansas City situation and pre-emptively start upgrading their infrastructure across the nation.
After all this happens, it doesn’t really matter if Google moves into other areas, or even continues to offer service. The end result is that they’ve accomplished their goal, and that’s good for all of us. If things proceed how I think they will, the US will see some major improvements in its residential Internet in the coming years.
Building a Better WHD Notifier
by TrueJournals on Jun.29, 2012, under programming, technology
At my work, we use Web Help Desk to track support tickets. It’s a very nice, very slick piece of software, and from my understanding, version 10.2 implements an API going forward. Unfortunately, we use version 10.1.10.1, which doesn’t have the same API.
The problem I ran into was this: often, there are other tasks to be completed, so I’m not just staring at the WHD ticket queue the entire time I’m at work. However, if a client logs a ticket, I have no way of being notified without going to the web help desk page and checking manually. This results in one of two things happening: either I check the ticket queue every few minutes, breaking my workflow on other projects, or a ticket doesn’t get responded to for a while.
I wanted a way to be notified of new or updated tickets that pop up without having to continually go back to the WHD tab of my browser to actually check for new or updated tickets. Unfortunately, without an API, I thought this was going to be difficult. Until I found this: a WHD Notifier for Chrome.
However, this still didn’t quite achieve what I was looking for. While this did provide me with a notification of the number of new or updated tickets within the browser, it didn’t notify me when this number changed. So, I still had to watch my web browser, and even then, continually check this small, inconspicuous number in the top-right corner of my screen. Since Chrome extensions are simply Javascript, HTML and CSS, I decided to try my hand at modifying it to better suit my needs.
Here’s the list of things I needed:
- Pop-up notifications of a change in the number of new/updated tickets (like in Gmail).
- The ability to not save username and password. The computers share a single profile between multiple users, so I don’t want individual passwords saved past the browser session.
- (Optional, if I could figure it out) Grabbing login info from the WHD web login, so you don’t have to go into the extension options every browser session.
After some coding, a lot of trial and error, and a lot of research, I was able to meet all three goals. If you’d find it useful, you can download WHD Notifier version 1.5. Simply configure it with your WHD URL, choose what to display/be notified of, and you’re good to go. If you login to web help desk, the notifier will grab that login information and start updating.
If you're interested in seeing the source code for this extension, you can find a zip of the source here. I'll see if I can throw it up on GitHub later today or this weekend. There are some interesting things being done here: submitting the login form in WHD calls forms.submit() instead of going through the normal submit procedure, which means I couldn't simply add an onsubmit handler to capture the login information.
Instead, I inject another piece of JavaScript into the page which overrides the submit() method for all forms. This calls chrome.extension.sendMessage to message the background page with the username and password, then proceeds to call the submit() method I had overridden. You can find the form handling things in autoLogin.js and autoLogin2.js. I had to inject autoLogin2.js on the page because chrome content scripts aren’t allowed to override prototype methods.
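If you're curious what that looks like without digging through the extension, the override itself boils down to something like this (a simplified sketch, not the exact code from autoLogin2.js; the field lookup here is an assumption about the markup):

// Save the real submit(), then wrap it so the login form can be inspected first.
var realSubmit = HTMLFormElement.prototype.submit;
HTMLFormElement.prototype.submit = function() {
  var user = this.querySelector('input[type="text"]');
  var pass = this.querySelector('input[type="password"]');
  if (user && pass) {
    // In the real extension, this is where the values get relayed to the
    // background page (the chrome.extension.sendMessage step described above).
    console.log('captured login for', user.value);
  }
  return realSubmit.apply(this, arguments);  // then run the original submit()
};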
If we ever update our WHD installation, I’ll try to update this extension again to use the new API — I should be able to add some more features with that, too. For now, this should work for all versions of WHD 10.0.8 or later.
Update: Code is now on GitHub
Automatically Associating Label and Input, and Setting Width
by TrueJournals on Jun.26, 2012, under technology
When creating forms in HTML, one of the best practices is to always use <label> to associate the labels with their form fields. However, this creates the complication of needing to give each label the “for” attribute, and perhaps wanting some extra magic to set the width of each label so that all the form fields are lined up. Recently, I decided to play around with form fields like this a little to see if I could automate this process using jQuery.
Here’s what I came up with:
var w = 0;
$("label").each(function() {
    if($(this).width() > w) w = $(this).width();
    // Assume the input immediately follows the label
    var input = $(this).next();
    input.prop('id', input.prop('name'));
    $(this).prop('for', input.prop('name'));
});
$("label").width(w + 10);
$("label").css('float', 'left');
And here’s what it does: first, it loops through each label on the page looking for the maximum width. When it finds that, it sets all labels to have a width of (maximum width)+10px. The extra 10px are just so the label and field aren’t directly on top of one another.
In the same loop, this code associates the label with the field next to it. For my sake, I made the assumption that I will always have <label>…</label><input />. If this is not the case, the code will associate the label with the wrong field. It simply sets the next element’s id to be the same as its name, and sets the label’s “for” property to be the element’s name, also.
Use this with caution. If you’re not careful, you could end up with duplicate IDs (a no-no). There are other reasons this code isn’t necessarily a good idea, but it’s good enough for me. For a page with a lazy HTML coder, this will work perfectly.
Bonus: Add input.prop('placeholder', $(this).text()); inside the loop to also set the "placeholder" text to whatever the label is. You could also detect if the browser supports "placeholder", and hide the label if it does:
var w = 0;
$("label").each(function() {
    if($(this).width() > w) w = $(this).width();
    // Assume the input immediately follows the label
    var input = $(this).next();
    input.prop('id', input.prop('name'));
    input.prop('placeholder', $(this).text());
    if('placeholder' in document.createElement('input')) $(this).hide();
    $(this).prop('for', input.prop('name'));
});
$("label").width(w + 10);
$("label").css('float', 'left');