cdtools, because who has just one project?



By mjhammel ~ April 6th, 2015. Filed under: General, Linux, Raspberry Pi, Software Development.

When I was just getting started with embedded development, I found that many tutorials on cross compiling required setting up shell functions and variables before working on builds.  This is a necessity for embedded work because the build for the target platform won't use the same toolchain and libraries as the host platform.  Unfortunately, the setups for the various publicly available build systems are all different.  Angstrom/OpenEmbedded vs Yocto vs Buildroot vs Mentor Graphics vs whoever: they all have their own setup.

Then I created my own build platform, PiBox.  I did this mostly to teach myself the whole platform bring-up process.  One of the side effects of this (along with having to build custom OPKG packages for PiBox) was the need to bounce from one build to another quickly.  I wanted them all to use the same navigation commands and to automatically point wherever they needed to with environment variables.  Thus was born cdtools.

Use case

Let's say I have a build system for the toolchain, kernel and root file system for my Raspberry Pi (re: PiBox).  I do out-of-tree builds and packaging to avoid clutter when looking for updates with git status.  So while my source lives under …/raspberrypi/src, my builds are under …/raspberrypi/bld and packages generated from the build are under …/raspberrypi/pkg.  So far so good.  That's not overly complex.

But now I need to bounce over to a kernel tree and make some updates and test them.  The updates I end up with will be integrated back into the PiBox build but before that happens I want to test them in a kernel-only tree (remember: PiBox is a build of multiple components – it is, in fact, a metabuild that downloads components, unpacks them, patches them, builds them and packages them).  So how do I bounce over to the kernel-only tree and run tests and then bounce back to the PiBox tree to integrate the changes?

Here's another scenario: I have a slew of metabuilds that do nothing but download 3rd party utilities, configure them as needed for PiBox and then package them as OPKG packages.  These builds are then wrapped in another metabuild that builds all of those package-metabuilds in order to simplify creating a complete PiBox release!  This top level metabuild has dependencies on the package metabuilds, so I may find myself running the top level metabuild, finding a bug in a 3rd party utility (due to an upstream change to the branch I'm using), switching over to that package metabuild to fix, commit and push the change, and then bouncing back to the top level metabuild to rebuild the packages.

What I need are common navigation tools for all projects and a way to change that navigation based on which project I want to work on right at this moment.  The way to do that is with shell variables.  But I need to do this without having all kinds of project-specific shell scripts that are all different.  There needs to be some consistency in how they are used.  I want to set the project name but use the same navigation to end up in source, build and package trees no matter which project I'm in.

cdtools

This shell script acts as a front end to a directory of configuration scripts, one for each project in which I do builds.  This script lives in ~/bin and sources any shell files found under ~/bin/env*.  In this way I separate builds for work from builds I do at home by placing the environment configurations in different directories, as in

~/bin/env.muse
~/bin/env.work

Whenever I add a new project (and this happens frequently for me) I just drop a new shell file into one of the env subdirectories and then reload cdtools.

. ~/bin/cdtools

This makes it easy to make changes without having to logout and back in.  It also makes it easy to do within a screen session so I can test new or modified configurations under one screen window without affecting the other windows until I’m ready for the update.
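
The front end itself doesn't need to be much more than a sourcing loop.  Here is a minimal sketch of the idea (not the actual cdtools contents) that pulls in every shell file under those directories:

# Sketch: source every project configuration file found under ~/bin/env.*
for envdir in "$HOME"/bin/env.*; do
    [ -d "$envdir" ] || continue
    for f in "$envdir"/*; do
        [ -f "$f" ] && . "$f"
    done
done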

shell functions

Each project has its own shell file which contains a project-specific shell function that sets up environment variables and aliases.  The environment variables are always the same but point to different things.  Typically I use variables prefixed with GM_, though not always.

  • SRCTOP – top of all source trees, under which you find all project-specific directories
  • GM_HOME – top of the project directory, under which you find the project source, build and package trees, along with others as needed
  • GM_WORK – a scratch pad area, often common between projects
  • GM_BUILD – the build directory for the project
  • GM_SRC – the source directory for the project
  • GM_PKG – the packaging directory for the project

Other variables can be set to update the executable PATH, library path, Java path, or anything else that's needed.  As long as the same variables are used in each shell file (with unused variables ignored), switching from one project to another will always properly set up the environment.

For the use case, the PiBox shell function is rpi and the Linux kernel shell function is kernel.  Typing either configures my environment to access, build and navigate their respective trees.
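
As a rough sketch (the real shell files differ in the details), a project function like rpi boils down to something like this.  The navigation aliases it relies on are shown in the next section, and the optional argument is the numeric suffix covered later under "Multiple versions of a single repo".

# Sketch only: not the actual PiBox configuration file.
rpi() {
    export SRCTOP=$HOME/src/ximba
    export GM_HOME=$SRCTOP/raspberrypi$1    # optional suffix: "rpi 2" uses raspberrypi2
    export GM_WORK=$SRCTOP/work
    export GM_SRC=$GM_HOME/src
    export GM_BUILD=$GM_HOME/bld
    export GM_PKG=$GM_HOME/pkg
    export GM_ARCHIVE=$GM_HOME/archive
    export GM_EXTRAS=$GM_HOME/extras
    alias cd?='listrpi'                     # per-project help alias, described below
}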

aliases

Remembering the environment variables is hard.  So is typing them.  So the shell function maps them to navigation aliases.  All navigation aliases are prefixed with cd, as in

  • cdt – top of all source trees, under which you find project directories
  • cdh – top of the project directory
  • cdx – source tree under the project directory
  • cdb – build tree under the project directory
  • cdp – package tree under the project directory
  • cda – archive directory, where a local copy of downloaded source is kept, to avoid having to redownload unless absolutely necessary (this includes git or mercurial managed trees, as needed)
  • cde – Extras directory

A special alias, cd?, is set up to call a project shell function that lists the aliases, the important environment variable settings, and help for cloning and accessing remote trees.  All projects use the same aliases.  Additional aliases can be set up as long as the context of the alias is the same for each project.
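
In a sketch, the navigation aliases are nothing more than cd wrappers around the project variables.  The single quotes matter: the variable is resolved when the alias runs, so the same definitions work for whichever project was selected last.

alias cdt='cd $SRCTOP'
alias cdh='cd $GM_HOME'
alias cdw='cd $GM_WORK'
alias cdx='cd $GM_SRC'
alias cdb='cd $GM_BUILD'
alias cdp='cd $GM_PKG'
alias cda='cd $GM_ARCHIVE'
alias cde='cd $GM_EXTRAS'

The exception is cd?, which each project function points at its own help function (listrpi, listkernel and so on).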

help functions

Until you've used cdtools for a while it can be hard to remember the aliases.  It's even harder to remember the sometimes convoluted git or mercurial (or other source code management system) commands necessary to work with remote repositories.  A help function is included in each shell file.  This is always prefixed with "list".  In our use case, the helper function is listrpi or listkernel.  But you don't have to remember this because the alias cd? always points to the currently active project's help function.

$ cd?
raspberrypi Alias settings:
-----------------------------------------------------------------------------
cdt cd SRCTOP (/home/mjhammel/src/ximba)
cdh cd GM_HOME (/home/mjhammel/src/ximba/raspberrypi)
cdw cd GM_WORK (/home/mjhammel/src/ximba/work)
cdx cd GM_SRC (/home/mjhammel/src/ximba/raspberrypi/src)
cdb cd GM_BUILD (/home/mjhammel/src/ximba/raspberrypi/bld)
cda cd GM_ARCHIVE (/home/mjhammel/src/ximba/raspberrypi/archive)
cdp cd GM_PKG (/home/mjhammel/src/ximba/raspberrypi/pkg)
cde cd GM_EXTRAS (/home/mjhammel/src/ximba/raspberrypi/extras)
To checkout tree:
cdt
mkdir raspberrypi
cdh
git clone ssh://git@gitlab.com:pibox/pibox.git src

cdlist

So what happens when you have umpteen projects and you can't remember which shell function sets up a particular project?  You can get a list of available functions and their descriptions using the cdlist function, which is included in the cdtools script.  The cdlist function browses all shell files and extracts the shell function name, a description and the name of the file it's found in.  The description comes from a tag in a shell comment, as in this example.

 # DESC: PiBox: Embedded environment for ARM-based system using buildroot

To search for this I can pass any string in the DESC field, such as PiBox:

 $ cdlist pibox
 ...
 pmsui PiBox Media Center UI (pms) [pmsui.sh]
 pmsuip PiBox Media Center UI (pms) [pmsuip.sh]
 psp PiBox psplash opkg build [psplash.sh]
 rpi PiBox: Embedded environment for ARM-based system using buildroot [raspberrypi]

All shell files with descriptions that include the string "pibox" (case insensitive) are listed.  Now I can see the shell function (rpi), which is listed first, and the file in which that function is defined (raspberrypi), which is listed at the end of the line inside brackets.
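
A cdlist along these lines takes little more than grep.  This is only a sketch of the idea (it assumes the project function is defined first in the file, in the name() style), not the actual implementation:

# Sketch: list project functions whose DESC tag matches a pattern (default: all).
cdlist() {
    local pattern=${1:-.} f desc func
    for f in "$HOME"/bin/env.*/*; do
        [ -f "$f" ] || continue
        desc=$(grep -i "# DESC:.*$pattern" "$f") || continue
        func=$(grep -m1 '()' "$f" | sed 's/ *().*//')
        echo "$func ${desc#*DESC: } [$(basename "$f")]"
    done
}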

Example usage

First, load all cdtools scripts.

. ~/bin/cdtools

Now set up to work in the PiBox tree.  Then change to the source tree, see what's there, and then switch to the packaging tree and do the same.

 rpi
 cdx
 ls -C
 Makefile  README  config.mk  configs  docs  pkg  scripts  src  util.mk
 cdp
 ls -C
 firmware opkg rootfs.ext3
 kernel.img.3.10.y opkg.10 rpiToolchain-0.9.0_Fedora19_201409081418-1.x86_64.rpm
 kernel.img.3.14.y opkg.9 rpiToolchain-0.9.0_Fedora19_201409162158-1.x86_64.rpm
 mkinstall.sh pibox-0.9.0_Fedora19_201409081418-src.tar.gz staging.tar.gz
 mksd.sh pibox-0.9.0_Fedora19_201409081418.tar.gz

I can see that I have done packaging after multiple builds here.  Wait, let’s check what’s in the top level metabuild tree.

 meta
 cdx
 ls -C
 MakeChangelog.sh  build.sh  docs  functions.sh  src

Okay, that looks fine.  Now back to the PiBox tree.

 rpi
 cdx

Easy peasy.

Multiple versions of a single repo

A neat trick here is that if you format the shell function correctly you can append a number.  That number will allow you to have multiple versions of the same repo, as in this example.

 rpi
 rpi 2
 rpi 3

Checking the alias settings of the middle one, we see this.

raspberrypi Alias settings:
-----------------------------------------------------------------------------
cdt cd SRCTOP (/home/mjhammel/src/ximba)
cdh cd GM_HOME (/home/mjhammel/src/ximba/raspberrypi2)
cdw cd GM_WORK (/home/mjhammel/src/ximba/work)
cdx cd GM_SRC (/home/mjhammel/src/ximba/raspberrypi2/src)
cdb cd GM_BUILD (/home/mjhammel/src/ximba/raspberrypi2/bld)
cda cd GM_ARCHIVE (/home/mjhammel/src/ximba/raspberrypi2/archive)
cdp cd GM_PKG (/home/mjhammel/src/ximba/raspberrypi2/pkg)
cde cd GM_EXTRAS (/home/mjhammel/src/ximba/raspberrypi2/extras)
To checkout tree:
cdt
mkdir raspberrypi2
cdh
git clone ssh://git@gitlab.com:pibox/pibox.git src

Note the "2" in the directory paths.  I can now have multiple copies of the same repo.  And this trick can go further: additional arguments to a single shell function can select different-yet-related repositories.

 bui fakekey
 bui network-config 2

And so forth.
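
Under the hood this just means the function folds its arguments into the paths.  A sketch (not the real bui) of how that can work:

# Sketch: the first argument names the repo, the optional second argument
# selects a numbered copy of it (network-config, network-config2, ...).
bui() {
    export GM_HOME=$SRCTOP/$1$2
    export GM_SRC=$GM_HOME/src
    export GM_BUILD=$GM_HOME/bld
    export GM_PKG=$GM_HOME/pkg
}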

Extras

The cdtools script currently includes support for color coding the cdlist output using ANSI escape sequences.  This works in most terminal types.  More importantly, there is no reason you can't extend cdtools to support additional functionality that you want to embed in your own set of shell files.

Consistency of practice

What you should get out of this is the consistency of use when moving from one project to another.  How I build projects can vary greatly: PiBox uses GNU Make while one of its packages (launcher, for example) uses Autoconf.  This happens all the time because when doing a system build (a build of an OS plus utilities plus custom kernels plus custom apps, etc.) you may depend on many 3rd party source trees with a variety of build patterns.  But that doesn't matter.  What matters is whether I can bounce to a project and build it quickly, with pretty much everything I need already in place when I do.

With cdtools, the answer to that question is a resounding YES.


Networking with QEMU and KVM speedup



By mjhammel ~ April 6th, 2015. Filed under: Fedora, Hardware, Linux, virtualization.

Networking in QEMU

I've recently had a need to test UEFI booting for a disk image.  I stumbled upon OVMF, a UEFI firmware for QEMU that handles UEFI booting.  It works pretty well but sometimes doesn't remember its config.  But that's not why I'm writing this.

In my disk image I need dual network interfaces that are on specific networks.  I don't need tap interfaces.  I can live with the slower user-mode interfaces.  But I needed to put them on separate networks.  So I tried this.

-netdev user,id=user0,dhcpstart=192.168.20.20,hostfwd=tcp::5555-:22 -device e1000,netdev=user0
-netdev user,id=user1,dhcpstart=192.168.50.50,hostfwd=tcp::5556-:22 -device e1000,netdev=user1

I googled till I was blue to figure out how to do this.  You can set the dhcpstart address, but qemu-system-x86_64 spews the following message:

device 'user' could not be initialized

It doesn’t say why, but after fiddling with it for a long time it became obvious that this was a configuration error.  There just doesn’t seem to be any explanation as to what configuration is wrong.

Finally I found a site with an example.  The key appears to be that you have to include the network address (the net= option) for the network that interface will be on.  Without it, the dhcpstart address falls outside the default user-mode network (10.0.2.0/24), which appears to be why the device fails to initialize.  Here is what the correct configuration should look like.

-netdev user,id=user0,net=192.168.20.0/24,dhcpstart=192.168.20.20,hostfwd=tcp::5555-:22 -device e1000,netdev=user0
-netdev user,id=user1,net=192.168.50.0/24,dhcpstart=192.168.50.50,hostfwd=tcp::5556-:22 -device e1000,netdev=user1

Now both interfaces come up on the assigned networks.  The hostfwd option maps host ports to guest ports: the first number (5555 here) is the host port and the second (22) is the port on the guest.  Here I'm mapping host ports to the ssh port on the guest.  Since I'm using user-mode networking, which NATs the guest behind the host, this is required; without it there is no way to reach the guest's sshd from the host.
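
Putting it all together, a full invocation might look something like the following.  The disk image and memory size are placeholders; only the network options come from above.

qemu-system-x86_64 -enable-kvm -m 1024 \
  -drive file=disk.img,format=raw \
  -netdev user,id=user0,net=192.168.20.0/24,dhcpstart=192.168.20.20,hostfwd=tcp::5555-:22 \
  -device e1000,netdev=user0 \
  -netdev user,id=user1,net=192.168.50.0/24,dhcpstart=192.168.50.50,hostfwd=tcp::5556-:22 \
  -device e1000,netdev=user1

With that running, ssh -p 5555 localhost reaches the guest's sshd through the first interface.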

KVM Speedup

Another thing I discovered when running qemu-system-x86_64 is that it was slow on my Fedora 21 box.  The reason is simple: you don't get the KVM speedups (which are significant) when running the command manually if libvirtd is already running.  You can go through the hoops of tying the two together, but for a quick qemu session with full virtualization and much better performance, just shut off libvirtd.

sudo service libvirtd stop

That's using the archaic service command instead of the ugly systemctl interface (thank you very little, systemd) but it works.  Again, this is for Fedora.  Your distro mileage will vary.
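
The systemd-native equivalent, for those who can stomach it:

sudo systemctl stop libvirtd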


Migrating from gitorious – it sucks.



By mjhammel ~ April 1st, 2015. Filed under: General.

I can no longer push to gitorious so I tried to run the import of all my repos (and there are a lot) into GitLab. This failed miserably. Only two of 34 repos were imported. Some of the repos actually got created on GitLab but there is nothing in them. So I tried to import one of them manually. It's been running for about 20 minutes now and nothing is happening. No errors, no messages, no nothing. Just a spinning wheel.

I dug around and found the process for migrating a repo, be it to GitLab or GitHub or whatever.  It requires manual creation of each repository.  While gitorious allowed me to have different projects with different repos under them, GitHub does not.  At least I don’t see a way to do that.  GitLab does, but they call them groups.  The import appears to have created the groups but that’s all it did.

So I guess I can try doing this manually under each group, after manually creating each repo.  *sigh*  This is a strong argument for running my own git server and gitweb.
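
For the record, once the destination repo has been created the manual route for a single repo boils down to a mirror clone and push (the URLs here are placeholders):

git clone --mirror git@gitorious.org:somegroup/somerepo.git
cd somerepo.git
git push --mirror git@gitlab.com:somegroup/somerepo.git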


apache 2.4/Fedora quirk in welcome.conf



By mjhammel ~ March 10th, 2015. Filed under: Fedora, Linux, Software Development, Ubuntu.

I was trying to set up a directory index via a named vhost under Apache today.  Typically all you need to do for this scenario is something like this in an apache conf file.

<VirtualHost *:80>
    ServerName files.gfxmuse.org
    DocumentRoot /home/httpd/files
    <Directory "/home/httpd/files">
        Options +Indexes
    </Directory>
</VirtualHost>

Then restart apache and make sure the server name resolves somehow (DNS or /etc/hosts) and you’re good to go.

But with Apache 2.4 something changed.  This doesn't work.  You get the nice welcome page that says the web server is running but you don't get the directory index.  I found an answer online, hidden behind a requirement to "like" it, but the problem became very clear: the welcome.conf configuration file overrides my directory index.

In welcome.conf you find this bit of code.

<LocationMatch "^/+$">
Options -Indexes
ErrorDocument 403 /.noindex.html
</LocationMatch>

That Options -Indexes line is the problem.  The easiest solution, in this case where all I want is a web server that shows a directory listing, is to comment out that whole block.  Restarting the web server and accessing my file server works as expected now.
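
On Fedora the file in question is /etc/httpd/conf.d/welcome.conf, and after commenting out the block it looks like this.  Then restart the web server (sudo systemctl restart httpd, or the old service httpd restart).

#<LocationMatch "^/+$">
#    Options -Indexes
#    ErrorDocument 403 /.noindex.html
#</LocationMatch>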

I had this problem on Fedora 21.  I’ve seen others have it with Ubuntu.  So the source of welcome.conf content may be the apache project itself.  In other words, this solution may be applicable to any distribution using Apache 2.4 or later.


PiBox update: user and developer guides



By mjhammel ~ March 2nd, 2015. Filed under: PiBox, Raspberry Pi.

I was job hunting from late November to late January.  Since few of the people needed for interviews tend to be available during the 12th month of the year, I was able to devote a lot of time to PiBox development.  However, once the new year rolled in I was bouncing around from office to office in my finest nerd sweaters and cleanest hiking shoes.  Fortunately, no one was interested in my fashion sense.  But it required a bit of travelling and lots of phone chatting, which played havoc with the concentration required to do development work.  So little PiBox work got done.

Fortunately all that ended in mid February when HGST decided I was the least offensive candidate that week.  But it takes a while to get back into a project like PiBox.  So I thought I'd start slowly by finally getting around to writing some end user and developer documentation on the wiki.

I managed to complete a user guide to explain how to use the media systems.  Turns out it’s pretty easy, which is a good thing.  The developer document was a bit harder since I wasn’t sure if I wanted to explain how to use and modify the development platform’s build system or explain how to write apps for the media systems.  Turns out I did a little of both.  That’s a good thing too.  I think.

What’s still missing is the protocol document explaining how messages are passed around the media system components.  I’ve got some diagrams up and an idea of what I need to say.  I just need to get some time to finish it up.

And then I can get back to my issues list.  So much to do.  So little time.  Anyone wanna help?


Debian/Ubuntu debootstrap images for VirtualBox



By mjhammel ~ February 27th, 2015. Filed under: Fedora, Linux, Software Development, Ubuntu, virtualization.

I’ve been working on building custom Debian and Ubuntu distributions for use under VirtualBox.   One advantage that both have over Fedora is debootstrap.  This tool allows you to create a default rootfs from pre-compiled packages inside a directory.  You can then chroot into that directory to install the kernel image, extra packages and do any additional customizations.

Where this gets really cool is using debootstrap with a few additional tools on an image file.  The process is fairly simple.  First you create an image file using dd and add a partition table with two partitions, boot and rootfs, using parted.  Loop mount (kpartx) and bind mount boot under rootfs.  Format the partitions (mkfs.ext3).  Now you're ready to install the rootfs using debootstrap.  Once installed, chroot into that directory and run customization scripts.  Finally exit the chroot, unmount and install a bootloader such as grub.  That creates the raw disk image.  You can then use qemu-img to convert that to various VM image formats such as those for VirtualBox, Xen and KVM.

I handle this using a front end script that runs seven steps, most of which run outside of the chroot but a few of which run inside it.  The seven steps, sketched in shell after the list, are

  1. Create image file
  2. Create partition table
  3. Loop mount and bind mount
  4. Install rootfs with debootstrap
  5. Copy in chroot scripts and data files
  6. Chroot, run those scripts and unmount
  7. Convert to VM image
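
In shell, the skeleton of those steps comes out roughly like this.  The image size, Debian release, mirror, mount point and loop device names are all assumptions to adjust, and the per-distro chroot script and bootloader install are glossed over.

IMG=debian.img
dd if=/dev/zero of=$IMG bs=1M count=4096               # 1. 4GB raw image file
parted -s $IMG mklabel msdos \
    mkpart primary ext3 1MiB 257MiB \
    mkpart primary ext3 257MiB 100%                    # 2. boot and rootfs partitions
kpartx -av $IMG                                        # 3. exposes /dev/mapper/loop0p1, loop0p2
mkfs.ext3 /dev/mapper/loop0p1
mkfs.ext3 /dev/mapper/loop0p2
mkdir -p /mnt/rootfs
mount /dev/mapper/loop0p2 /mnt/rootfs
debootstrap --arch amd64 jessie /mnt/rootfs http://httpredir.debian.org/debian   # 4. install rootfs
mount /dev/mapper/loop0p1 /mnt/rootfs/boot             # separate boot partition
cp chroot-setup.sh /mnt/rootfs/tmp/                    # 5. copy in chroot scripts and data
chroot /mnt/rootfs /tmp/chroot-setup.sh                # 6. kernel image, packages, customization
umount /mnt/rootfs/boot /mnt/rootfs
kpartx -d $IMG
qemu-img convert -O vdi $IMG debian.vdi                # 7. convert for VirtualBox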

These seven are pretty common to either Debian or Ubuntu.  There are small variations, such as what you include with debootstrap and perhaps how you need to do your loop mounts.  The differences come from the chroot scripts and data files.  There is one script that sets up the chroot environment as necessary to perform additional package installations.  Setup can include network interfaces, setting the locale and installing prerequisites necessary for other installations.

Within the chroot script is the installation of the Linux kernel image.  This gets installed under /boot, which was bind mounted outside of the chroot so we have separate boot and rootfs partitions when we boot the image.  What's interesting is that most information you find online about this process assumes you're building Debian or Ubuntu on Debian or Ubuntu.  But I'm not.  I'm on Fedora.

This means that when you get to installing grub the instructions you find don't always match what you need to do.  Fedora 21 uses grub2.  Debian uses grub or grub2 and Ubuntu has grub.  Does it matter?  Turns out it doesn't.  All that matters is that the grub from your distro is installed to /boot on the image and the MBR of the image file points to it.  That way qemu-img can create an appropriate image for your VM.

For bonus points, the use of qemu-img allows you to convert this raw disk image into a variety of VM image formats.  So a single image build is quickly converted to the VM environment of choice.
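
For example (the file names are arbitrary):

qemu-img convert -O vdi disk.img disk.vdi       # VirtualBox
qemu-img convert -O vmdk disk.img disk.vmdk     # VMware
qemu-img convert -O qcow2 disk.img disk.qcow2   # KVM/QEMU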

In the end I found that testing the VM image under VirtualBox was pretty easy, including getting the guest utilities installed as part of a firstboot process.  Those utils need to rebuild kernel modules and you can't do that from within the chroot easily.  It's easier to just create a firstboot script that installs the utils and rebuilds the modules, gets run the first time the image boots in the VM, and then removes itself afterward.

So now I can quickly bring up a Debian VM.  The whole process takes about 15 minutes unattended.  The only problem is it's not Fedora.  If I could do this under Fedora or CentOS, I'd be sooo happy.


Raspberry Pi: Kodi vs PiBox use case comparison



By mjhammel ~ February 6th, 2015. Filed under: Hardware, Linux, Movies, PiBox, Raspberry Pi, Software Development, XBMC.
[Image: PiBox is part of miot.]

PiBox is a member of miot, which is a brand I’m using for a variety of solutions I’m working on for the Raspberry Pi. “miot” stands for “My Internet Of Things”.

I’ve been working on PiBox now for a couple of years.  The immediate goal is a consumer-oriented device for video playback while not connected to the Internet.  In other words, play movies while camping.  I have longer term goals for it, but that’s where the project is aimed currently.

At home I used to use XBMC (now known as Kodi) on a big-metal server sitting behind my big screen TV for media services.  Its primary use was to play the 500+ movies I've ripped to ISO images.  That machine hasn't run in quite some time as it got quite hot behind the TV and I think it basically died.  Instead, we have a Roku hooked up to that TV and we stream Hulu, Netflix and Amazon.  The problem with the Roku (and other streaming boxes and sticks) is that none seems to support playback of locally stored videos.  So I still have a use for XBMC (re: Kodi) type software.

I could use PiBox for this but because omxplayer doesn't support ISO images I would have to rip all my ISOs into MP4s.  That's doable, but my wife likes to have the DVD menu system and the ability to add and remove subtitles.  So MP4s by themselves are not sufficient for this use case, at least not without external subtitles ripped separately.

So I installed OpenELEC on an SD card and brought it up on a Raspberry Pi Model B+ with a wifi dongle.  Setup was simple enough and I was able to connect to my NFS-exported video collection quite easily.  They even do an NFS query, something akin to an SMB query.  I didn't know you could do that.  Anyway, it automatically found my exports and I was able to configure them without any fuss.  I then did a library update and had all the posters I needed, though some are incorrect.  Those can be fixed later if necessary.  The update found all the ISO images plus the MP4s, since both are on the NFS exports.

I then tried to play a movie ripped as an ISO.  The Pi was not overclocked and loading the initial videos leading to the menu took quite some time.  Eventually they loaded and played, albeit with lots of jumpy behaviour.  I didn't have audio hooked up at the time.  I tried to jump to the DVD menu but it never displayed correctly.  I was unable to play the movie.

Next I tried overclocking the Pi.  The lead-in videos didn't play and I was only able to see the text for "Play Movie" from the DVD menu.  I managed to start the movie and it played modestly well but still with too many freezes and jumps.  Due to the nature of the video playback I never tried testing the audio playback.

What this tells me is that the ISO use case is not well supported on the Pi.  At least not the Model B+.  The Model 2 might fare better.  I don't have one yet and won't be getting one for a while due to changing jobs.  It also tells me that the trailer use case – the use case for PiBox – would be much simpler if I just used PiBox and ripped everything to MP4s.  However, I will have to add configuration support for using separately ripped subtitle files.  The VideoFE app, which wraps omxplayer, doesn't have this yet though omxplayer apparently supports it.

So my design of PiBox with the specific use case of MP4s seems to be a better solution than Kodi if all you want to do is playback your locally ripped videos on a Raspberry Pi.  That’s good to know.  I like Kodi and use it on my desktop.  But it’s not necessarily the right app for a device like the Pi given all use cases.  As a side note, I designed the VideoFE to use whatever playback tool you choose, so it could be used on my desktop as well.  I would just swap out omxplayer with xine or mplayer.  I haven’t tried that yet, but intend to before too long.

 


Life update: new gig @ HGST



By mjhammel ~ February 6th, 2015. Filed under: General, Personal.

Last November I left CEI where I had worked for 8 1/2 years.  I had plenty saved up to cover me to the next gig.  So out went the resumes into the void and we waited for the holidays to pass to get an idea of how things have changed in the techy job market.

Turns out they've changed a lot.  I got 8 hits by the end of the first full week of January.  Four of those went the distance with 2 offers, 1 offer in progress and 1 that eventually bowed out.  All of these were remote, either in Seattle or Portland or work-from-home.  A few others interviewed me but then disappeared.  Only Amazon interviewed me and then turned around and rejected me right away.  That's the third time they've done that, so I think I'm done interviewing with Amazon.  They don't do any real embedded work anyway (their Lab126 subsidiary does, but not Amazon itself).

While waiting for those offers to be finalized, HGST swooped in at the last second.  They contacted me, set up an interview, brought me in and made an offer in less than a week.  HGST has a relatively new office here in Colorado Springs so I wouldn't have to relo.  They also made a terrific offer, better than any of the others.  And the office is actually closer than CEI.  Considering my old commute was about 6 minutes, I found it surprising I'd get an offer from someone even closer to home.

We had been excited by the idea of moving to the Pacific Northwest.  We’ve been there and really loved it, despite what everyone says about it raining or being overcast all the time.  Unfortunately, the timing is wrong.  My house won’t sell for what I bought it for so I gotta wait that out for at least a couple more years.  Economic changes come slowly to Colorado Springs.  It takes a long time to drop down like everyone else, but then it takes a lot longer to rebound after everyone else has.   And if we wait a couple more years I may just stay here and buy a vacation spot somewhere else.  It’s hard to leave the mountains behind too.

So I'm happy to announce that I've accepted the HGST gig.  It's a storage product group and I'll be working with firmware – bootloaders, Linux distro, customization and product apps.  Cool beans, I say.  My luck never seems to run out.  My wife and I are planning to build out the basement (hello, media room), make major updates to the landscaping and settle in for a while.  If we need to get away, we've still got the Pod.  And DisneyWorld.

Life is good.


git changelog generation: one liner



By mjhammel ~ October 27th, 2013. Filed under: General.

Needed some place to note this short one-liner for generating a changelog between version releases with git.

git log v0.5...v0.6 --pretty=format:'%s'

The output looks something like this.

RM #225: Fix xcc to build when gmp is already installed. This is a back-rev'd patch from Crosstool-NG that required no changes so xcc would be on Fedora 19.
RM #225: Disable XBMC for dev platform build except for tinyxml.
RM #225: crtmpserver depends on tinyxml from XBMC package added by PiBox to Buildroot.

This works as long as you've tagged your tree.  To tag the tree, use an annotated tag and then push to the origin to share it.

git tag -a v0.1 -m"Why did I do this?"
git push origin --tags
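
A related variation lists everything since the most recent tag, whatever that happens to be:

git log $(git describe --tags --abbrev=0)..HEAD --pretty=format:'%s'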

That's it.


DisplayLink vs TFT resistive touch screen for PiBox



By mjhammel ~ August 21st, 2013. Filed under: Hardware, Raspberry Pi.

Okay, so working alone you often find yourself falling behind the curve.  I just got my DisplayLink working the other night, and today I found this little device – a 2.8" TFT resistive touch screen for the Pi.

[Image: 2.8" TFT resistive touch screen]

This is a much better solution than the DisplayLink for the trailer.  But if I spend any more money getting this thing working I may have to sell one of the dogs.  Eeek.

I'm going to keep going with what I have and then return to the touch screen option a little later.  I need to get to a fully functioning prototype before I go into cost-reduction and feature-enhancement mode.

You can buy this display via eBay.uk.  It's about $32US, plus shipping.  There is a discussion topic about it on the Raspberry Pi forums for the original version, which has since been updated.
