Uncategorized

The Evolution of Programming Languages

Binary

Ah, sweet purity! I can write my 100 lines of code with complete control. Only deep machine architecture knowledge is required. Please don’t drop the stack of punch cards.

Assembly code

So close to the machine. I can write my 1000 lines of code with complete control. Only deep machine architecture knowledge is required. Dang, maybe I can write some macros for this repeated code.

Perl

I eat strings for breakfast.  Write once, read none. The compiler is you. Named parameters let me write a function that takes 497 arguments. Idioms, idioms, idioms!

Basic

Because line numbers make it easier to understand.

C

It’s like a super macro assembler. I know exactly what it’s doing, and it makes my 100k lines of code so clear. malloc() or alloc() and should I cast that pointer? Memory leak, schmemory leak! How many years until we get ANSI C and then C99?

C++

It’s the best and worst of C and OO! Strong typing until our friend C needs a reinterpret_cast<>. Every routine you need, including those you don’t, is already in the STL. So easy, just don’t forget that virtual destructor. Templates are the Perl of C++ (write once, read none). I can make my 1M lines of code so neat and clear.

Visual Basic

Drag-n-drop GUIs and 16-bit delight. It does everything you need but has nothing you want–like usable object orientation.

Python

Braces are for weaklings! It has types… sorta. Life is great in a single thread. I can self-comment the code whenever I don’t want to. Python virtual environments: they work virtually in someone’s dream. I can write 10k lines of code that does the work of 1M and still has the same bugs.

Lua

Never heard of it.  Every single other language uses “!” to indicate a NOT operator, so we’ll use “~” because we want you to write more syntax errors.  Does anyone really know how the garbage collector works?

Ruby

I came, I went. The Queen is dead, long live the Queen.

Java

Types, darnit! Everything inherits from Object including God. Write once, debug everywhere. Java applets will save the web! [crickets chirping] But, my 1M lines of code is so neat.

C#

We don’t want to share our toys in Java’s playground. We’re leaving!

JavaScript

Oh crap, people actually want to do things on the web? Hey, Brendan Eich, can you code us up a language over the weekend? Sure, we’ll call it “JavaScript” because it has nothing to do with Java. We don’t need no stinkin’ types since it’s just for small dynamic web pages. Oh crap, you do want full apps? We’ll fix it with Dojo, qooxdoo, jQuery, Handlebars, Underscore, TypeScript, Angular, React, Vue, server-side rendering and a unified browser framework that does the same thing the JVM and applets did 10 years ago but now with more Retsin? I still want hard types so let’s use a TypeScript pre-compiler which, combined with ESLint, restores those cryptic C++ template-like compiler errors you were so (not) missing. I can make 1M lines of code unclear and just as buggy.

Go, Rust, D, Elm, Kotlin, Crystal, Elixir, …

OK, maybe this time…

node.js, Web Development

NVM FTW

The Technical Problem

It’s not uncommon to have to manage multiple versions of Node.js on one development machine.  This challenge is exacerbated when the developer has to maintain multiple websites, some of which may be quite old and make use of older versions of Node.js and global Node tooling modules.  It may not be appropriate to update the project to the latest/greatest versions of modules because of breaking changes and limited time for refactoring.  Just as common is the situation where a developer may wish to experiment with a newer version of Node.js or a different version of a global module without disrupting their development environment.

NVM to the Rescue

One way to simplify the management of multiple versions of Node.js is via a tool called Node Version Manager (NVM) on Unix-based systems and NVM for Windows on Windows-based systems.   NVM (for Unix) is a creative set of bash scripts which ensures the specified version of Node.js is the one used in a given shell environment.  The Windows version is similar in functionality and an equally clever endeavor; it is implemented in Go and uses symbolic links to point the normal Node.js installation location to a different actual location.  Follow the previous links for information on how to install each.

Benefits

NVM permits installing and using multiple versions of Node.js on a development system without having to continuously uninstall and re-install.  This ease-of-use is helpful when trying out new versions of Node as well as different versions of globally available modules (like Angular or React).  One feature of NVM is that it keeps separate locations for each Node.js version’s global modules.  That is, when you switch versions of Node.js you get a unique global node_modules directory.  This uniqueness allows you to install different versions of global tools to coincide with a Node.js version.  It makes it easy to switch back to older projects which may require the older global modules with which they were originally built.  This feature only applies to global modules; individual projects’ node_modules directories behave the normal way.
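For example, here’s a minimal sketch of what that looks like in practice (the version numbers are just illustrative):

# Install two Node.js versions side by side
nvm install 8.11.3
nvm install 10.9.0

# Each version gets its own set of global modules
nvm use 8.11.3
npm install -g @angular/cli@6.0.0    # lives under the 8.11.3 tree

nvm use 10.9.0
npm install -g @angular/cli@6.1.2    # lives under the 10.9.0 tree

# Confirm which node and which globals are currently active
node --version
npm list -g --depth=0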

Dev Tips

It can be helpful to have a script with each project that specifies both the version of node the project expects as well as the global modules it may need.  For example, a bash script named “install-required-tools.sh” could be added to the top-level of a project and might look like this:

#!/bin/bash

nvm install 10.0.3
nvm use 10.0.3
npm install -g @angular/cli@6.1.2
npm install -g eslint@5.3.0
npm install -g typescript@2.7.2

# Install other local project modules
npm install

On Windows, the “nvm use 10.0.3” may require a mouse-click confirmation if the version of Node.js isn’t already active.  No manual confirmation is needed on Unix, which makes it much easier for automated builds.  Also, in Unix, you can work with different versions of node simultaneously in different terminal sessions.  Not so much in Windows, but nvm still makes it easier to switch back and forth.  The script above only needs to be run when someone changes it (e.g., to bump the version of Angular.)
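On the Unix side, nvm also understands a per-project .nvmrc file, which can complement such a script (to my knowledge NVM for Windows doesn’t read it, so the script remains the portable option).  A minimal sketch:

# Put the expected version in .nvmrc at the project root
echo "10.0.3" > .nvmrc

# From the project directory, these commands pick up the version from .nvmrc
nvm install
nvm use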

The added benefit of such a script is that the project no longer needs to document a list of commands to run or things to install globally in a README.md or project wiki.  It need only document, “run this script”.  Such an update is easy to communicate to a team, “Hey guys! Run the install-required-tools.sh script when you pull the latest!”.

More Complicated

There are limitations to this approach.  If you need one version of node in conjunction with multiple versions of global tools it may become more difficult.  In such a case I might begin to suggest the use of Docker or a VM for your development environment.

Web Development

JavaScript Is Always Changing

JavaScript Melting Pot

https://increment.com/development/the-melting-pot-of-javascript/

So no, it’s not the dependency iceberg itself that is worrying me.  It’s the proliferation of configuration options.

— Dan Abramov

Old Coder

When I started professional development in the early ’90s I only had a few software development languages and tools at my disposal.  C++ was just gaining steam and available to me on my PC in MS/DOS via Borland Turbo C++ (which I bought at a student discount.)   NCSU provided very nice Sun SPARCstations in their computer labs.  I spent many-a-late-night in the labs, but for personal use no student could afford buying a SPARCstation.  Microsoft Visual Studio was evolving in Windows and sometimes unstable.  Linux was spanking new.  Java was also new (and controlled by Sun), but didn’t start rolling hard until about 1995.  The primary source of learning new development-related skills was via books (not yet the Internet).  The pace of change was slow and controlled by the companies who built the compilers, just as Abramov indicates.  When Microsoft rolled out .NET and Windows XP circa 2000 it took about 3 years for .NET to gain full uptake in the Windows ecosystem.  Even I shied away from it until .NET hit 2.0.  .NET’s growth required a huge advertising and educational push from Microsoft to encourage developers to adopt it, and yet they still had to keep around older APIs like Microsoft Foundation Classes, COM, ATL, and WTL because of the massive quantity of legacy code and developer lock-in.

Nowadays…

Does this globulous menagerie make you feel anxious?

Gulp Bower Grunt Yeoman NPM

What about today? We should expect at least one new software development language a year to appear and gain some traction.  How many transpiled-to-JavaScript languages have appeared in the last few years?  Expect a whole new paradigm of dev tools with JavaScript at least once or twice a year.  I think this proliferation comes from the beauty of open source and from the abundance of developers.  There are many more developers now than when I got started.  There are many smart people in the ecosystem who are tired of waiting around to convince someone else to fix their problems.  Smart folks argue best by making stuff.  The modern JavaScript ecosystem both scares me and makes me giddy.  If I work on something in June, then come back to it in October there’s a good chance what I was using is now outdated.

This pace is frightening and frustrating for those developers who have a nagging desire to always be using the best tool while craving stability.  When the tooling keeps changing there is no “best” tool.  A particular challenge in the modern JavaScript environment is creating longer-life corporate web-based products which need to be around for a few years to make good return on the development investment.  Thus, finding peace requires a change of mindset.  I liken it to the metaphor of grabbing a morphing cyborg by the hand and learning to dance.

Good Practice Makes Good Play

The core challenge with all the new tools (and all the old tools) is figuring out how to apply good development practice and methodology.  Good practice and design concepts are timeless.  An amusing part of entering the JavaScript environment as a mature developer is watching the ecosystem walk through the well-known growth pangs experienced by all maturing development environments.  A good example of this truth is the rising popularity of TypeScript.  Duck-typing is great for single-developer smaller projects.  But, when you need to scale a large application and involve lots of developers it causes problems.  This was known over 40 years ago, but JavaScript was not initially intended for such large scale development.  Now it is.  The Community responded.  Awesomesauce!

Nowadays Was Yesterday

My most recent efforts have been using a React dev environment with Inferno (because React’s patent clause freaked many folks out and caused confusion).  But, wait, Facebook is dropping the patent clause for React.  Maybe we can move to full React.  Time to change…

Uncategorized

Backup, Schmackup!

New Backup Hardware

Recently, after putting off a data backup overhaul and almost suffering catastrophe (read below), I invested in a Synology DS216+II with two 4TB drives.  It was a significant investment.  But it provides:

  • Always-on access (it will power down into a sleep mode to conserve energy when not active)
  • A rich set of apps (Android, iOS, AppleTV, web-accessible) for me to connect to it wherever I am, including video, audio, and pictures.  The apps are top notch.
  • Dynamic DNS (they provide the service) which is usable by all the apps and lets me connect to it from anywhere
  • Very easy to set up and configure.

As home NAS comparisons go the top two manufacturers (as of my own recent survey) are QNAP and Synology.  A very easy analogy between the two: Synology is to QNAP as iOS is to Android (in terms of interfaces—not legal approaches).  QNAP gives more tweaks, but isn’t quite as intuitive.  I probably would have been fine either way.  Do your own research.

Synology DiskStation will allow you to use a simple file hierarchy for your photos, videos, and music files and still take advantage of their apps for easy perusal.  Of course, you can just use it like a network disk.  I like this file approach as it’s the most universal layout for your files and doesn’t depend on proprietary formats.

Offsite Backup

I use Backblaze to backup my individual computers offsite.  I want security if my house burns down.  In addition, I have a Microsoft Family Office 365 subscription (wife needs the latest MS Office for her contract work) and it comes with 1 TB of OneDrive folders for each user.  I use one of the accounts as a “backup” account and have Synology’s easy built-in cloud sync app push my videos and photos folders to that OneDrive account (I’ve set it to push-only so there are no accidental deletes if Microsoft borks my data).  I also replicate the videos/photos folders to my Mac via the built-in Synology sync folder software so it’s doubly-backed-up to Backblaze.  Now I have a “I don’t have to think about it anymore” solution.

Paranoid!?  I’m not Paranoid!

Here’s why I go to all that trouble for offsite backup.  I almost lost major data because of not fully understanding the fine print.

A Long Short Story

Before the NAS I had an old iMac which ran the Server app and served as the Time Machine backup for all my home Macs. It also served up the music and video files via iTunes.  They were all locked into Apple’s formats.  I didn’t have a way of sharing photos easily with the immediate family other than the occasional post to my SmugMug account.  I was using iPhoto and its proprietary format and had not yet migrated to the new “Photos” yet-another-proprietary photo-storage app.   I didn’t want to fork over my moolah for iCloud hosting as I don’t trust Apple in regards to cloud stuff.  Cloud storage isn’t Apple’s core competency, and I have a very technically competent friend who had Apple lose lots of his critical files in iCloud and it wasn’t his fault.  I was already using Backblaze for offsite backup of all the Macs.  Flash back to several months ago when we started a renovation at our house.  I disconnected the iMac and didn’t reconnect it for 3 months, hedging my bets that the offsite Backblaze backup would suffice for my and my wife’s laptops.  Upon reconnecting the iMac I see the internal secondary hard drive has died.  No problem, I think, as I still have an off-site backup of those files.  I check to verify the files are still remotely archived (in the April time frame), and I figure I can restore whenever.  I order a new hard drive and don’t get around to reconnecting it until July.  I feel confident that all my laptops are being remotely backed up and that I can restore the old photos and videos which had resided on the old server later.

Upon adding the new hard drive to the server I log into Backblaze’s interface to restore the files and, whoops, they have vanished.  Long story short, they hadn’t seen my machine for a long period of time and they have a policy that, since they aren’t an “archival” service, they start to dump your files after 30 or more days of not seeing them.  In addition, their software seemed to wrongly consider my secondary internal drive a temporary external drive.  Gratefully, I was able to talk with their tech support and they restored my files from some way-back cavern of wherever they stored them (with only minor loss).

I almost lost my most important files (home videos and photos) and would have had to eat a huge cost of trying a forensic hard drive retrieval to get them back.  As much as I like Backblaze their fine-print was not very clear regarding secondary hard drives.  Furthermore, they don’t really specify what files they are deleting when they start the purge.

So, I trust no-one.  I have my own multiple back-ups:

  • All Macs using time machine backup on RAID-1 Synology NAS
  • Photos and videos replicated on NAS and a local (iMac) machine
  • Files on Macs backed-up offsite using Backblaze
  • Photos and videos synced to OneDrive

Two offsite back-ups is a bit of overkill.  One other option I could have considered was dropping Backblaze and buying a duplicate NAS.  Synology can set up a replication service between NASes and I could have a friend with adequate bandwidth host it at their house.  It would have paid for itself in about 5 years, but I felt that was too much effort.

I also considered Amazon Glacier and Backblaze’s own B2 service.  But, I was concerned about the administrative effort of picking and choosing the files to back up on the Macs and felt the additional cost for the brain-dead solution was better.

Linux, Software Development, Web Development

Cloud9 – Edit Your Code From Anywhere

The Gist

Cloud9 is a powerful web-based source code editing environment that has recently solved the major headache of editing files on a device located across a slow network connection.  This post is not a description of how to use the features of Cloud9, but more simply a rationale for why I use it as well as how to install it and get it up and running.

If you want to skip past my pithy, verbose, and poor pontification and just learn how to install Cloud9 click here.  

Big Fat Disclaimer

Choosing one’s code editor is a very personal decision.  We developers get much too much enjoyment out of arguing “my code editor’s better than yours.”  Pick the one you like best and which helps you get things done.  If you’re happy using Windows Notepad and it lets you accomplish what you need, then more power to you.  Most development is about 20% clerical and 80% cognitive (I totally made up those numbers to make this article sound more intelligent).

The Pain

I have worked in several circumstances where the build environment was complex enough that I couldn’t build it locally on my laptop.  While located within the company’s speedy network (behind the firewall) I typically had no issues with file access.  However, when I needed to access the company network via a slow VPN, things got untenable.  I’ve had other similar circumstances where remote editing was problematic.  Do any of these sound familiar to you?

  • The primary build environment is a very elaborate mix of cross-compilers and requires access to many network-located tools and resources.
  • Tool licensing limits its location to a single build machine.
  • Company security policy prohibits retaining source code on systems which leave the office.
  • VPN access is available for personally-owned devices, but company policy prohibits keeping code on personally-owned devices.
  • VPN access to network file shares is exceptionally slow and any delay in the edit, save, and recompile workflow hampers productivity.
  • The build environment is Linux, but the company only issues Windows laptops.

In scenarios where I had a company laptop on which I could retain the source code, I’d tried all manner of techniques for editing.  Here are some of the things I’ve tried.

  • Used a local git repo on the laptop:  I’d push changes to the company-network-located device over the VPN.  The problem with this scenario is that it requires a commit-push on the laptop, then a pull or rebase on the company device.  While this ensures changes are well preserved, it is a pain when you want a fast save-recompile cycle.
  • Use Dropbox: Dropbox is exceptionally fast at syncing files between two machines.  However, it also replicates the files on Dropbox’s servers.  This is understandably unacceptable for many companies in terms of preserving IP on company-owned devices only.  There are tools like Seafile which are Dropbox-like and allow keeping the files only on company-owned hardware.  However, the fastest syncing I’ve experienced with Seafile is 30 seconds over a VPN.  This is not tenable for a fast workflow.
  • Use a Samba share for direct edit and save:  while this works when you only need to edit a single file, working on a whole source tree over a slow network share is painful.  Try grep-searching for matches over thousands of files on a VPN-connected Samba share.  If you have a good movie to watch while you wait (like “Gandhi”) then you’ll be fine.

What about vi or Emacs?

Any coder who slings “vi” or “Emacs” may attempt to slap me into submission with their very valid point that these editors have been allowing remote editing over nothing more than a terminal session for decades.  For these tools slow bandwidth connections generally aren’t a problem.  Furthermore these tools are infinitely extensible and fast.

To you, I reply, “Yes, but I want my GUI”.  See, there are these things called Graphical User Interfaces.  They make use of an interface called a mouse cursor, which allows for selection of text and novel things called “context menus”.  And these tools have been generally available for about 25 years.

All snarky comments aside, vi and Emacs, when used by experienced users, are things to behold.  And, with proper terminal configuration, they support mouse interaction.  But, the learning curve for both those tools is very steep.  When you decide to use those tools you must commit.  I mean, you go all-in.  You are in for a significant learning curve to wield them.  Vi was originally created on terminals that didn’t have cursor keys.  It is my opinion as a certifiable UI snob that vi and Emacs are not intuitive at all.  They made lots of sense for terminals of the time.  But for modern computers which have full interactive GUIs at their disposal they tend to force a text-only/keyboard-only interface.  For all you vi and Emacs masters, I salute you.  Yes, be smug in your “skillz” for you can be proud.  But, like the Perl programming language, though I need to know enough to get by using it, I don’t have to like it.

I confess that if, instead of spending as much time as I have on my quest to find the perfect web-based editor, I had spent that time learning vi or Emacs, then I likely wouldn’t be writing this article.  But, you, the reader, can now benefit from my pain and not feel guilty about not mastering vi or Emacs to solve your remote editing requirements.

Setup and Install

These instructions assume a Linux or Mac host.  I have not tested or installed Cloud9 on a Windows machine and am not sure how well it works.  If using an Ubuntu-based Linux distro I found that I needed to install the “build-essential” and “python2.7” packages like:

  > sudo apt-get install build-essential python2.7

Preparation

Setting up and installing Cloud9 requires a few considerations:

  1. By default it installs to the ~/.c9 directory.  If your computer uses a shared network home directory with limited quota you might consider first creating a symlink for this directory to another less-limited local directory (a minimal sketch follows this list).
  2. Be sure you have a reasonably current version of Node.js for the install.  Right now Cloud9 is using Node v4.4.6.  The Cloud9 build will even install its own version of node (I show how to make use of it below).  I highly recommend the use of Node Version Manager to manage the versions of Node.js you use.
  3. You will need the “git” tool.  For OS X the easiest avenue to attain it is to use brew.  However, I chose to install Xcode.  Though it is a large download, it’s handy to have.
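For item 1, a rough sketch of the symlink approach (the scratch path here is just an example, not a prescribed location):

# Create a roomier local directory and point ~/.c9 at it before installing
mkdir -p /local/scratch/c9
ln -s /local/scratch/c9 ~/.c9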

Install

The primary installation instructions are here:  https://github.com/c9/core.  I’ve copied them here (it really is just three lines).

From the terminal run these commands:

git clone git://github.com/c9/core.git c9sdk
cd c9sdk
scripts/install-sdk.sh

The install may take several minutes as it must download several npm packages.  There are further instructions on the website about how to pull down updates to Cloud9.  This is a two-step process which consists of going into your local git repo, doing a “git pull”, then re-running the install-sdk.sh script.
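As a quick sketch of that update flow (assuming the c9sdk clone from above):

cd c9sdk
git pull
scripts/install-sdk.sh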

Launching It (There’s a bug!)

According to this link there’s a quirk with the installer that requires you to run this command after running the install (from the c9sdk working directory):

git checkout HEAD -- node_modules

I have found the best mechanism for running Cloud9 once it is installed is to use the version of node.js that is built/installed with Cloud9.  Below is a handy script for running Cloud9.  I typically navigate into the project directory I want to edit and then run it.  Cloud9 treats the directory in which it runs as the “project”.  You can run it from your top-level home directory if you wish, but it will treat your entire series of directories as the project and include them in searches, which will slow it down and noise up your results.  I find it better to limit it to the directory of the project I’m editing.

I will often run this script via “screen cloud9” so as to enable me to keep it running in the background when I close the terminal session. More info on the “screen” tool can be found here.
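Here’s a rough sketch of that screen workflow (the session name is just an example):

cd ~/my/awesome/project/of/destiny
screen -S cloud9        # start a named screen session
cloud9                  # launch Cloud9 inside it
# Press Ctrl-a d to detach; the server keeps running
screen -r cloud9        # reattach to the session later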

Helper Script

Copy the contents of this file into a file named “cloud9” and put it in a directory included in your path.

#!/bin/sh
# First argument: project directory to serve (defaults to the current directory)
project=${1}
# Second argument: port to listen on (defaults to 8181)
port=${2}

if [ -z "${project}" ]; then
 project=$(pwd)
fi

if [ -z "${port}" ]; then
 port=8181
fi

# Use the node binary Cloud9 installed for itself, listen on all interfaces,
# and protect the instance with the simple username:password given to -a
~/.c9/node/bin/node /path/to/git/repo/on/your/computer/c9sdk/server.js -l 0.0.0.0 -w ${project} -p ${port} -a nerdy:guy

Example of running the script

> cd ~/my/awesome/project/of/destiny
> cloud9

Yep, it’s that simple. The first time you access Cloud9 via the web it will kick off lots of compilations of Less CSS (you’ll see lots of these in the output.)  They only happen the first time.  A “.c9” directory will be created in the directory in which you run Cloud9.  This is where it stores its state.

Things you need to change (and consider) in this script

  • Remember to “chmod u+x” the script to be executable.
  • Update the “/path/to/git/repo/on/your/computer/…” to be the path to the c9sdk directory you created above.
  • Note you can specify a port on the command line.  If you don’t, it will default to 8181.  This ability to specify a different port allows you to run multiple instances of Cloud9 at once if you need to edit two different projects separately (see the example after this list).  The first time you run it on a Mac you will be prompted about allowing external connections (via OS X’s firewall).
  • Set the username and password.  The “-a” parameter allows setting a username and password if you want one.    Omit this option if you don’t want to limit access.  It is only simple protection (in the example I use “nerdy:guy”.)  While this does not provide any type of encryption (you will need to do your own research on how to accomplish that) it does provide a small bit of protection to your code.  Since my Cloud9 server is only accessible within my company’s well-protected VPN I felt safe with this minimal protection.  Again, research how to use an HTTPS proxy (like apache or nginx) to provide HTTPS.
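For instance, a quick sketch of running two separate instances (the project paths are just examples; remember the script takes the project path first and the port second):

# Terminal 1: first project on the default port (8181)
cd ~/projects/website
cloud9

# Terminal 2: second project on a different port
cloud9 ~/projects/firmware 8282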

Accessing Your Cloud9

Now, to access your Cloud9 session (if started at port 8181) simply open your web browser and type in the domain name of your machine (or IP address) along with the port.  For example, if your device is at 192.168.1.101 then you’d enter

http://192.168.1.101:8181

You’ll be prompted for your username and password.  Enter and enjoy!

Troubleshooting

If you see lots of ENOENT errors you may have a permissions issue.  It could also be a node version issue.  I found these issues went away when I used the script above to ensure I was using the right version of node.js.
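A quick sanity check when that happens is to compare the node your shell finds with the one Cloud9 bundles (the bundled path is the same one the helper script above uses):

node --version
~/.c9/node/bin/node --version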

Summary

I really like Cloud9.  While it is not my main editor (I use Sublime Text) it could be.  I’ve been impressed with the continued improvement that the development team has put into it.  I like that they’ve kept it open source.

The Pros

  • Last editing state is preserved when you close the browser.  You can pick up from a computer right where you left off.
  • Terminal sessions are preserved.  The terminal emulator that it uses (tmux) allows keeping your terminal sessions going even when you close the browser.
  • Lazy search for files
  • Multiple panes for viewing files
  • Ability to browse the local file system.  One of the options is to enable the viewing of the home path along with the project.  The home path will be excluded from searches while still allowing you to browse it.
  • Works well on low bandwidth connections
  • You can actually drag and drop a local file into the window.  Saving the local file is a bit tenuous, but it is useful for comparing local to remote files
  • Debugging node.js type server-based apps is well done

The Cons

  • Changing theme colors isn’t intuitive and requires trial and error to tweak the CSS
  • Because it is a web editor it suffers from the application-in-an-application discontinuity (web app within a browser).  Full screen browser mode helps this discontinuity some.  Likewise, the Cloud9 Chrome “app” can help.
  • I haven’t found a Cscope-like plugin for navigating C/C++ code and symbols.  I should write one…
  • Support for C/C++ development isn’t as strong as the web development support.

 

Uncategorized

Creating Chrome Apps from Websites

In Windows and Linux, Chrome allows you to save a website as an “app” icon to either your desktop or start menu.  What’s nice about these apps is they run without the surrounding browser noise and are treated as separate instances of Chrome (you can open and close them without opening and closing Chrome proper).  Such an icon is really handy for apps like Cloud9 IDE.

Unfortunately, the same is not provided on the Mac (who knows why?)   However, with help from this post some nice fellow geeks have written a bash script as well as an AppleScript to do the heavy lifting.  See the link below for more information (scroll to the bottom of the article for the link to the AppleScript).

https://www.lessannoyingcrm.com/blog/2010/08/149/Create+application+shortcuts+in+Google+Chrome+on+a+Mac

ChromeOS, Google, Linux, Opinion

Chromebooks

I took an interest in Chromebooks about a year ago (late 2013) wondering what the benefit of a web-only-ish laptop was.  At the time I found a deal for a refurbished Acer C720 for only $160 and took the plunge.  I had ulterior motives to also use it as a Linux device, as I’d read many had done.  This post is a rough overview of the pluses and minuses of Chromebooks.  It isn’t exhaustive, but I do think Chromebooks serve a very useful purpose in a hacker’s toolkit.

Chrome OS Benefits

I was immediately impressed with Chrome OS and find it useful in a few ways:

  • As an uber-cheap and secure “internet portal” it is the perfect device to take on a trip where you are concerned it might get lost or stolen.
  • The Chromebook hardware support is there in the BIOS to ensure that your data stays protected. In fact, if you’re worried it might get snooped-on as it goes through customs, you could log out of it and wipe your account, then log back in once you get to your destination (assuming you trust the internet connection you’d be later making.)
  • It supports multiple users quite well.  My wife and I can log in with our Google accounts and hand it back and forth as needed.
  • It’s light and fast.  It underscores just how bloated operating systems have become.  Because it is stripped down to the basic essentials it does everything very quickly.  It’s just enough OS and GUI to let the Chrome browser run.  Even on the 1.4 GHz dual-core Celeron with 2GB of RAM that the Acer C720 has, it runs fast.

Chrome OS Not-So-Greats

Here’s where Chrome OS doesn’t work well (these are rather obvious):

  • You must have Microsoft Word/Excel/Outlook (the applications) and can’t make use of (or don’t have access to) Microsoft 365’s web-based tools.
  • You need some PC-based, iOS-based, or Android-based software that doesn’t work on ChromeOS.

Chromebooks in Education

I have read that Chromebooks are making good strides in the education market.  Honestly, I think they are a fantastic option for schools.  Before mentioning Chromebooks’ benefits I do want to mention iPads.  I have an iPad.  I loooooove my iPad.  My iPad rocks the Casbah.  iPads are seeing good use in education.  However, they have some drawbacks in education use:

  • They are expensive
  • They don’t have a keyboard attached.  While younger folks are likely more adept at using the on-screen keyboard, I find it very hard to enter lots of text using them.  Equally as frustrating is that half the screen is covered by the keyboard when it is visible.  Sometimes, you really need cursor keys, even when you can touch the whole screen.  Though one can get very decent keyboard cases (I got an awesome deal on a used one), many are wireless.  Lots of wireless keyboards in a space like a classroom can be problematic.  It’s an added expense to an already expensive device.
  • They don’t multitask well.  Switching back and forth between apps is extremely helpful if you are reading a website and needing to write a report or enter some information in another application.  Switching between apps in iOS requires either using the doesn’t-always-take four-finger drag or double-tapping the home button.  You must wait for the pretty animation.
  • When the tablets are owned by the school system they must be administered.  This means instituting policies and installing those policies on the iPads.  This requires effort.  Ensuring OS updates are installed also requires effort.  Doing all these tasks for many, many devices requires lots and lots of effort.
  • They can’t be shared by different users.  iPads are single-login-only devices.
  • Backup and restore are lengthy processes.  If a student is backing-up their device regularly they either need to have a PC or Mac at home and be fairly religious about it or they need to have iCloud backup enabled.  Restoring from an iCloud backup is a lengthy process.  Restoring from one’s home PC isn’t going to be facilitated by the school’s IT team.  So, if the device is damaged or faulty the student will likely be down for one to two days and could possibly lose significant work.

Here’s where Chromebooks have some great advantages in education:

  • They are cheap, cheap, cheap.  I’d feel much better about handing a student a $150-200 (bulk purchased) item than a $250-$500 device.
  • The user’s data is always preserved in the cloud.  While use of one does require a (free) Google account, the user never has to worry about losing the data and can access it wherever they need to.  It stays with them even if they lose their Chromebook.
  • If a user’s device is broken, they need only be handed a new one, log in with their Google account, and near instantaneously have all their apps and preferences back where they started.
  • They multitask just as well as a using a PC.  Just click between the tabs you want, or if you’ve split out your browser tabs you can Alt-Tab to the other window.  There’s even a handy keyboard function key for displaying all your windows at once.
  • They have a great keyboard and mouse already attached.  Note that I’m a fan of the chiclet type keys that MacBooks have popularized.  I know some folks don’t like them so keep that in mind.  I do not expect an IBM Model M keyboard to be released as part of a Chromebook.
  • They support multiple users.

It is my understanding that Google already gives free domains to non-profits and educational institutions.  It is important to note that iPads (or even Windows laptops), if owned by the school, do provide a mechanism of limiting the installation of software.  Likewise, there are many, many excellent iOS software programs available which might not have equals on Chromebooks.  Personally, I think iPads are preferable for reading eBooks.

Note that I compared iPads to Chromebooks.  I didn’t mention Android tablets.  I think Android tablets likely suffer the same issues as the iPads, albeit they may have a better mechanism for multitasking (some Android implementations, that is.)

Other articles that have some interesting commentary about Chromebooks in education:

Acer C720 Chromebook Hardware

Here’s what I like about the Acer C720 hardware:

  • The device is incredibly light.
  • It has an SD card reader, an HDMI hookup, a USB 3.0 connection, a great keyboard, and a great touchpad.
  • The battery can run for about 8 hours.
  • The SSD is upgradeable.
  • With ChromeOS it is instant on.  And, I mean instant.  It wakes from sleep so quickly.

Here’s what is lacking from the $160 hardware:

  • A quality screen
  • A strong laptop body (one drop of this thing and it will surely shatter)
  • 2GB of non-upgradable RAM

Because the attempt is to make the laptop very cheap, the result is a very cheap laptop.  The screen on the C720 is pretty lousy, but for $160 I wasn’t expecting much.  There are more expensive and better quality Chromebooks available.  However, I find that once you enter the price range of a better-quality Chromebook you could likely have purchased a full capability laptop and put Linux on it.

Running Linux on a Chromebook

If you want a semi-decent Linux box, the Acer C720 (and most other Chromebooks) provides an avenue to put the hardware in developer mode.  This mode allows bypassing the special boot security so that you can install another operating system.  Note, however, that by doing so you do open a hole in the security that Chrome OS and the hardware vendor have worked so hard to provide.

I enabled developer mode so that I could also install Linux.  I first tried using “Crouton” which allows you to run Linux side-by-side with Chrome OS.  Because my built-in SSD was only 16GB I had to use an external hard drive.  Even using an external USB 3.0 SSD enclosure, booting Linux was pokey.

I decided to invest a bit more money in my C720 and put in a 128 GB SSD using these instructions.  I was able to get one for about $65.  Yes, it’s almost half the price I paid for the Chromebook.  I could have gotten a 64 GB SSD for about $50.  This brings home the point that if what you really want is a cheap Linux laptop you are probably better off spending a bit more money, waiting for a good sale, and buying a $300-ish laptop which affords you lots more flexibility and a better screen.

Using the larger SSD I installed ChrUbuntu using these instructions.  Much props to the guy who created all the install scripts.  His website is a trove of great info.  I can now dual-boot into both Linux and Chrome OS.  The biggest downside to this little Linux laptop is that 2GB of RAM is rather tight, even with a minimal window manager.  Chrome on Linux (and other OSes) has become rather RAM hungry.  Running more than a few tabs and other programs quickly exhausts the memory and results in swapping.  However, even with these limitations, I really like my little Linux laptop.  It’s rather peppy.

Be mindful to occasionally have Linux perform a TRIM operation to keep the flash performant.  This article has helpful information about TRIM on Linux.  See the bottom of the article to find a quick-and-easy script you can use.
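As a bare-bones sketch (assuming your root filesystem lives on the SSD), a manual TRIM pass looks like this; the linked article covers scripting and scheduling it:

# Trim unused blocks on the root filesystem; -v reports how much was trimmed
sudo fstrim -v /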

Use of Chromebooks by Non-Techies

A non-tech-savvy friend of mine was in need of a device for accessing the internet.  She had a very limited amount of money to spend.  So, I recommended she purchase a Chromebook and it has worked out very well.  I set her up with a Google account and she can email and access all the things she needs.  She never has to worry about viruses, and the Chromebook will update itself (it only requires an occasional reboot which takes all of 30 seconds.)  I highly recommend Chromebooks for folks who are technophobes, or for elderly folk who may be getting into computers for the first time.  They are so much easier to maintain.

Chromebook Future

One area I think Chromebooks will continue to excel in is capability for their low price.  Good screens will continue to become cheaper.  Likewise, fast processors will become cheaper.  As of the time this post was written (February 2015) you can buy a nice brand new Acer Chromebook for $160.  It’s almost twice as fast (the CPU) as my Acer C720, which I bought less than a year ago as a refurbished model for that same price:

http://www.amazon.com/gp/product/B00MMLV7VQ/

Knocking on Intel’s door are ARM-based processors.  Intel has been the prime provider of low-power, fast CPUs.  Because Chromebooks merely act as the conduit for running web-based software, they are prime candidates for running atop non-Intel processors such as ARM processors.  In fact, many ARM-based Chromebooks already exist.  As of now, the ARM-based Chromebooks aren’t nearly as performant as the Intel-based ones (like the Acer C720).  However, this gap is quickly closing.  I wouldn’t be surprised to see many more ARM-based Chromebooks released over the course of the next year which will be more than fast enough.

Apple Mac

A Lovely Bug in Mac Mail.app

The Niceties

One of the niceties of OS X (since 10.6 Snow Leopard) was its ability to directly connect to a Microsoft Exchange Server (as long as it has service pack 2 or higher.)  One of the things I like about Mac Mail is the fact that it is a dedicated app.  While Microsoft Outlook isn’t “bad” I find that when I want to look at my mail and my calendar it forces me to switch between the two.  Mac Mail and Calendar being separate apps makes this much easier. I like the fact that they address two different work domains and thus are two different apps.

The Bug

At my day job I am using 10.9 Mavericks.  As Apple makes it quite simple, I was easily able to connect to my employer’s Exchange server.  I knew that the “automatic” feature of checking email in Mail’s preferences didn’t work very well for receiving new mail in a timely fashion.  So, I simply set it up to poll for new mail about every 3 minutes.  I’d set it up similarly before at a previous employer and it had worked quite well.  However, I noticed after about an hour, I would stop receiving emails.  I could close Mail, then reopen it and I would start receiving mail again.

When opening the “Activity Window” via Window->Activity you can see the ongoing operations.  What I saw when I stopped receiving mail was that it appeared to be hung while reading mail.  The picture to the right shows what Mail.app looks like when successfully querying Gmail.  The stop sign icons allow you to stop requests.  Similarly, on the hung Exchange Server request I could cancel the current operation and the subsequent requests would work again…for a while.

No Solution, Yet

It seems this is a known bug and Apple suggests a solution.  Their solution is effectively “turn it off and turn it back on again”.  Other suggestions imply it manifests itself in a variety of ways, all of which make Mail.app not very usable other than ensuring you are no longer bothered by emails (since it will stop checking them).  I got so frustrated I have resorted to using Outlook on the Mac.

The forum post suggests that Apple isn’t handling an edge case of authentication negotiations and dropped connections.  While Apple might argue “Microsoft isn’t doing it right” it doesn’t matter from the perspective of your users, Apple.  Mac Outlook works and the older Mail.app on Lion works.  If it requires a bit of a hack to fix the issue, then solve the problem of your users’ pain, Apple.

 

 

Embedded Development, Linux

Building a Cross-Compilation Environment for the BeagleBone

Do What?


Want to write a native executable for the BeagleBone?  You have a few options.  One is to compile the code directly on the BeagleBone with the native compiler that is included with the BeagleBone distribution you are running (if it includes one).  The other is to use a cross-compiler and compile the code on another computer.  This first option will work fine for smaller programs and the official distribution for the BeagleBone is Angstrom Linux-based and includes a compiler.  However, compiling on the Bone itself will be slow for large code bases (like the whole Linux kernel.)  Cross-compiling is better suited for bigger software programs and is much faster on modern PCs.

Here is a simple set of instructions for building the cross-compiler tools on two of the more popular Linux distros to allow you to compile native applications for your BeagleBone.  This information isn’t new and I’ve included the links to the original material below.  However, I’ve added a few more tidbits of information for compiling in either Ubuntu or CentOS.

What OS Do You Need?

This tutorial assumes you have already installed Linux natively on your hardware or as a virtual machine.  I used a VM in VirtualBox to accomplish these steps.  One VM was an Ubuntu variant (Xubuntu to be exact) and the other VM was CentOS.  Both worked well.  Ubuntu seems to cooperate better out of the box with the VirtualBox Guest Additions to allow the VM to operate more seamlessly.  This is primarily because the base Ubuntu distros include more packages.  CentOS is a slimmer default install but will work fine, too, with some manual tweaking.  The Guest Additions are helpful for things like cutting-and-pasting text to and from the guest OS and for resizing the VM to fit various window sizes.

I allotted both CPUs of my dual-core laptop as well as 2 GBs of memory to the VM.  This much memory is required for the build as the OpenEmbedded/Yocto compilation process builds up some large caches that require the memory.  Anything less than 2 GBs will likely cause build failures.  Once the cross compilation tools are built, you can reduce the amount of memory for the VM if need be.

Note, too, that the build process will generate almost 20 extra gigabytes of data.  Therefore, count on sizing the virtual hard disk of your VM to be able to handle this file data.  I typically use a dynamic type disk in VirtualBox, but only allocate a small portion of it to the OS.  This lets me grow it later.  I started out giving Linux only 40GB of a 120GB disk.  My CentOS VM started out at about 3 GB and grew to almost 23 GB.  In retrospect I probably should have gone ahead and given it 100 GB so as not to have to worry about doing other kernel builds while maintaining my old ones.  I can always resize the partition later.

Get the Linux Packages You Need

Before the Angstrom Linux build will complete, you need to ensure you have all the required packages on your build OS/VM/PC/Computerthingy.

Packages for Ubuntu

Using apt-get, acquire the following packages for your build:

sudo apt-get install git subversion gawk texinfo texi2html chrpath diffstat gcc g++ make

Packages for CentOS

Using the yum installer, acquire the packages below for your build.  Note that if you followed the CentOS instructions for setting up the VirtualBox Guest Additions, then you will have already acquired some of these packages so this will progress quicker:

su -c 'yum install git subversion gawk texinfo texi2html chrpath diffstat gcc gcc-c++ libgcc libstdc++ make'

Grab the build scripts

To start the build process you’ll next need to grab the build scripts. These build scripts are based on the OpenEmbedded project’s build system of BitBake recipes.  This system is an alternative to the Buildroot method of constructing Linux builds.  The build scripts are responsible for pulling down the required source, setting up the build environment, and then executing the build.  First, from your main user directory, make a directory to hold the build scripts.  I often name it “development” but you could name it “Francis”, “lemonmeringue”, or “portuguese_antelopes” if it makes you happy.

mkdir development

Change directories into this directory and retrieve the build scripts from the official Angstrom repository:

cd development
git clone git://github.com/Angstrom-distribution/setup-scripts.git

Note that the distribution of Angstrom Linux running on the BeagleBone I was testing was an image created in November of 2012.  Since I pulled the latest version of the Angstrom distro and followed the generic instructions for building the BeagleBone kernel and libraries, I’m assuming it will work with the latest, too.  I had tried using the latest version of the official BeagleBone Angstrom distro (June 2013), but experienced trouble with the gadget driver not even showing up (at least when interfacing with my MacBook.)  This build compatibility implies that the kernel version has stayed similar enough to be binary compatible with BeagleBone builds in the last year or so.  If the builds aren’t binary compatible (compatible with the kernel headers) the application just won’t run.

Tweak It a Smidgen

The default build configuration will delete the kernel sources once the build is complete.  This might be annoying if you want to make any tweaks or changes.  So, you’ll want to edit the configuration to prevent this.  Open the file “conf/local.conf” file (that is within the “setup-scripts” directory) using your favorite text editor and comment-out or remove the following line:

INHERIT += "rm_work"

You can comment it out by inserting a hash, “#”, in front of the line.  I chose to comment it out.
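If you prefer the command line, a one-liner like this comments it out (run from the “setup-scripts” directory, and assuming the line appears exactly as shown above); editing the file by hand works just as well:

sed -i 's/^INHERIT += "rm_work"/# INHERIT += "rm_work"/' conf/local.conf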

Build It Already!

From the terminal prompt within the “setup-scripts” directory type the following three commands in succession.  The first will take about five minutes to execute (depending on the speediness of your computer).  The second less than a minute.  And the third will really depend on how fast your machine is.  My CentOS VirtualBox VM running in a host OS of OS X 10.8 on a 2.3 GHz Intel Core i5 took about 3 hours.  The three commands are:

MACHINE=beaglebone ./oebb.sh config beaglebone

… lots of output gets spit out after this one …

MACHINE=beaglebone ./oebb.sh update

… lots of output gets spit out after this one …

MACHINE=beaglebone ./oebb.sh bitbake virtual/kernel

… lots of output gets spit out after this one ….  Here are the last several lines of the output from my successful build:

NOTE: Preparing runqueue
NOTE: Executing SetScene Tasks
NOTE: Executing RunQueue Tasks
WARNING: Failed to fetch URL http://www.apache.org/dist/apr/apr-util-1.5.1.tar.gz, attempting MIRRORS if available
WARNING: Failed to fetch URL http://cbuild.validation.linaro.org/snapshots/gcc-linaro-4.7-2013.02-01.tar.bz2, attempting MIRRORS if available
WARNING: Failed to fetch URL ftp://ftp.ossp.org/pkg/lib/uuid/uuid-1.6.2.tar.gz, attempting MIRRORS if available
WARNING: QA Issue: linux-mainline: Files/directories were installed but not shipped
 /lib/firmware/korg
 /lib/firmware/sb16
 /lib/firmware/korg/k1212.dsp
 /lib/firmware/sb16/alaw_main.csp
 /lib/firmware/sb16/mulaw_main.csp
 /lib/firmware/sb16/ima_adpcm_capture.csp
 /lib/firmware/sb16/ima_adpcm_init.csp
 /lib/firmware/sb16/ima_adpcm_playback.csp
NOTE: Tasks Summary: Attempted 907 tasks of which 249 didn't need to be rerun and all succeeded.

Where Are They?

The cross-compiler executables are located in the directory listed below.  The path to this directory may be different for your build, but will still be similar.

~/development/setup-scripts/build/tmp-angstrom_v2012_12-eglibc/sysroots/x86_64-linux/usr/bin/armv7a-vfp-neon-angstrom-linux-gnueabi

The contents in this directory are:

arm-angstrom-linux-gnueabi-addr2line
arm-angstrom-linux-gnueabi-ar
arm-angstrom-linux-gnueabi-as
arm-angstrom-linux-gnueabi-c++
arm-angstrom-linux-gnueabi-c++filt
arm-angstrom-linux-gnueabi-cpp
arm-angstrom-linux-gnueabi-elfedit
arm-angstrom-linux-gnueabi-g++
arm-angstrom-linux-gnueabi-gcc
arm-angstrom-linux-gnueabi-gcc-4.7.3
arm-angstrom-linux-gnueabi-gcc-ar
arm-angstrom-linux-gnueabi-gcc-nm
arm-angstrom-linux-gnueabi-gcc-ranlib
arm-angstrom-linux-gnueabi-gcov
arm-angstrom-linux-gnueabi-gprof
arm-angstrom-linux-gnueabi-ld
arm-angstrom-linux-gnueabi-ld.bfd
arm-angstrom-linux-gnueabi-nm
arm-angstrom-linux-gnueabi-objcopy
arm-angstrom-linux-gnueabi-objdump
arm-angstrom-linux-gnueabi-ranlib
arm-angstrom-linux-gnueabi-readelf
arm-angstrom-linux-gnueabi-size
arm-angstrom-linux-gnueabi-strings
arm-angstrom-linux-gnueabi-strip

Build a Simple App

To test that the cross-compiler is working, let’s build an incredibly simple “Hello World!” type application.  First, make a new directory beneath your development directory (or whatever you called it) and name it “howdy”:

mkdir ~/development/howdy

Then cut and paste this text into a text editor and save it as howdy.c in your newly created directory:

#include <stdio.h>

int main(void)
{
   printf("Hey you!  Yes, you, the big nerd in front of this monitor!  This program worked.\n");
   return 0;
}

Next, update your path environment variable like so (you can make it permanent if you want by adding it to your .bashrc file.):

PATH=$PATH:~/development/setup-scripts/build/tmp-angstrom_v2012_12-eglibc/sysroots/x86_64-linux/usr/bin/armv7a-vfp-neon-angstrom-linux-gnueabi

Then compile your file to get a simple executable:

arm-angstrom-linux-gnueabi-gcc -o howdy howdy.c

You should see that a “howdy” executable file is created.  One that you can’t run on your Linux build box, by the way, since it is built for the BeagleBone.  Next, copy this file to your BeagleBone (using “scp” if you have a network connection is the easiest).  Try to run it and it should spit out the correct text.  That’s it!  Here’s the “scp” command I used to copy my compiled file from my VM to my network-connected BeagleBone whose IP address was 192.168.1.86:

scp howdy root@192.168.1.86:howdy

And here’s the output on the BeagleBone:

root@beaglebone:~# ls -al
total 32
drwxr-xr-x 3 root root 4096 Jul 14 02:09 .
drwxr-sr-x 3 root root 4096 Nov 21 2012 ..
-rw------- 1 root root 139 Jul 14 02:05 .bash_history
-rw-r--r-- 1 root root 20 Nov 22 2012 .bashrc
-rw------- 1 root root 623 Nov 22 2012 .viminfo
drwxr-xr-x 2 root root 4096 Jan 1 2000 Desktop
-rwxr-xr-x 1 root root 7866 Jul 14 02:06 howdy
root@beaglebone:~# ./howdy
Hey you! Yes, you, the big nerd in front of this monitor! This program worked.
root@beaglebone:~#

References

Many thanks to these two posts from which I regurgitated all this material.
