Planet GRLUG

June 06, 2017

Whitemice Consulting

LDAP Search For Object By SID

All the interesting objects in an Active Directory DSA have an objectSID, which is used throughout the Windows subsystems as the reference for the object. When using a Samba4 (or later) domain controller it is possible to simply query for an object by its SID, as one would expect - like "(&(objectSID=S-1-...))". When using a Microsoft DC, however, searching for an object by its SID is not as straightforward; attempting to do so will only result in an invalid search filter error. Active Directory stores the objectSID as a binary value and one needs to search for it as such. Fortunately converting the text string SID value to a hex string is easy: see the guid2hex(text_sid) function below.

import ldap
import ldap.sasl
import ldaphelper

PDC_LDAP_URI = 'ldap://'
OBJECT_SID = 'S-1-5-21-2037442776-3290224752-88127236-1874'
LDAP_ROOT_DN = 'DC=example,DC=com'

def guid2hex(text_sid):
    """convert the text string SID to a hex encoded string"""
    s = ['\\{:02X}'.format(ord(x)) for x in text_sid]
    return ''.join(s)

def get_ldap_results(result):
    return ldaphelper.get_search_results(result)

if __name__ == '__main__':

    pdc = ldap.initialize(PDC_LDAP_URI)
    pdc.sasl_interactive_bind_s("", ldap.sasl.gssapi())
    result = pdc.search_s(
        LDAP_ROOT_DN,
        ldap.SCOPE_SUBTREE,
        '(&(objectSID={0}))'.format(guid2hex(OBJECT_SID), ),
        [ '*', ],
    )
    # filter out objects lacking a DN - they are LDAP referrals
    for obj in [x for x in get_ldap_results(result) if x.get_dn()]:
        print('DN: {0}'.format(obj.get_dn(), ))
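To see what the escaping actually produces, here is a standalone sketch of the same conversion (the function is repeated so the snippet runs on its own; the short sample string is just for illustration):

```python
def guid2hex(text_sid):
    """convert the text string SID to a hex encoded string"""
    return ''.join('\\{:02X}'.format(ord(x)) for x in text_sid)

# Each character becomes an escaped hex byte suitable for an LDAP filter:
print(guid2hex('S-1'))  # -> \53\2D\31
```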


by whitemice at June 06, 2017 12:11 AM

April 08, 2017

As it were ...

Why I no longer hate GoDaddy

There was a time when I said “never GoDaddy”. I turned down contracts when the client wanted to be hosted on GoDaddy, and wouldn’t budge. Over the last few years my attitude has changed pretty dramatically. I’m happy to work with GoDaddy now, and I like what they’re doing as a company.

Recently a friend tweeted this:

That is absolutely a fair question, and I think one that deserves a better answer than a tweet back, so this post is intended to be that answer.

Why I Didn’t Like GoDaddy

My primary reason was their choice to use sex as a marketing tool. Every commercial made me cringe. I felt so sad that NASCAR’s first serious female contender was cast as someone sexy rather than someone with amazing accomplishments. There was so much opportunity there to inspire young women and girls with the idea that they can break cultural norms.

A secondary reason was the lifestyle of the owner. He simply made choices I don’t like. Lots of people do, and that’s fine, but I made the choice not to use his product.

There were also some tech issues I didn’t like.  For a long time you couldn’t get shell access, for example. That annoyed me like crazy.

Lastly, they were the biggest player. I always root for the underdog.

What Changed

The real change came when key people inside GoDaddy decided the company was doing harmful things, and decided to do something about it. The owner sold the company and took a smaller and smaller role in controlling the company until he was simply gone.

At that point the opportunity existed to take a higher road, and they did it. The sex came out of the commercials. There are now more women than men in positions of authority inside the company.

In general things have really turned around.

What Doesn’t Matter

I recently heard someone bad mouth GoDaddy, and then someone else jump in and say “How can you hate GoDaddy?  Mendel Kurland is such a cool guy!” For the unaware, Mendel works there. And he is a cool guy, I like him a lot. I have other friends that work there too.

None of that matters. My beef wasn’t with individual people there, but corporate direction.

So Everything’s Perfect?

No. There are still things I don’t like about GoDaddy. But those things are in the same class as things I don’t like about every host as well. They’re not using protocol X, or they meddle too much in the site creation, or whatever. They’re not anything that I would feel like I need to apologize to my daughter for.

In Summary

In the past I’ve been vocal about “never GoDaddy”. I’m not that way anymore.

by topher at April 08, 2017 10:13 PM

March 07, 2017

Whitemice Consulting

KDC reply did not match expectations while getting initial credentials

Occasionally one gets reminded of something old.

[root@NAS04256 ~]# kinit
Password for adam@Example.Com: 
kinit: KDC reply did not match expectations while getting initial credentials


[root@NAS04256 ~]# kinit adam@EXAMPLE.COM
Password for adam@EXAMPLE.COM:
[root@NAS04256 ~]# 

The case of the realm name matters; Kerberos realm names are case-sensitive.

by whitemice at March 07, 2017 02:18 PM

February 09, 2017

Whitemice Consulting

The BOM Squad

So you have a lovely LDIF file of Active Directory schema that you want to import using the ldbmodify tool provided with Samba4... but when you attempt the import it fails with the error:

Error: First line of ldif must be a dn not 'dn'
Modified 0 records with 0 failures

Eh? @&^$*&@&^@! It does start with a dn: attribute - it is an LDIF file!

Once you cool down you look at the file using od, just in case, and you see:

0000000   o   ;   ?   d   n   :  sp   c   n   =   H   o   r   d   e   -

The first line does not actually begin with "dn:" - it starts with the "o;?". You've been bitten by the BOM! But even opening the file in vi you cannot see the BOM because every tool knows about the BOM and deals with it - with the exception of anything LDIF related.

The fix is to break out dusty old sed and remove the BOM -

sed -e '1s/^\xef\xbb\xbf//' horde-person.ldf  > nobom.ldf

And double checking it with od again:

0000000   d   n   :  sp   c   n   =   H   o   r   d   e   -   A   g   o

The file now actually starts with a "dn" attribute!
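If sed is not handy, the same cleanup is trivial in Python; a minimal sketch of the idea, operating on bytes (the sample data here is hypothetical):

```python
BOM = b'\xef\xbb\xbf'  # the UTF-8 byte order mark

def strip_bom(data):
    """Return data with a leading UTF-8 BOM removed, if one is present."""
    return data[len(BOM):] if data.startswith(BOM) else data

print(strip_bom(b'\xef\xbb\xbfdn: cn=Horde-Agora'))  # -> b'dn: cn=Horde-Agora'
```

Read the LDIF in binary mode, pass it through strip_bom, and write it back out.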

by whitemice at February 09, 2017 12:09 PM

Installation & Initialization of PostGIS

Distribution: CentOS 6.x / RHEL 6.x

If you already have a current version of PostgreSQL server installed on your server from the PGDG repository you should skip these first two steps.

Enable PGDG repository

curl -O
rpm -ivh pgdg-centos93-9.3-1.noarch.rpm

Disable all PostgreSQL packages from the distribution repositories. This involves editing the /etc/yum.repos.d/CentOS-Base.repo file. Add the line "exclude=postgresql*" to both the "[base]" and "[updates]" stanzas. If you skip this step everything will appear to work - but in the future a yum update may break your system.

Install PostgreSQL Server

yum install postgresql93-server

Once installed you need to initialize and start the PostgreSQL instance

service postgresql-9.3 initdb
service postgresql-9.3 start

If you wish the PostgreSQL instance to start with the system at boot use chkconfig to enable it for the current runlevel.

chkconfig postgresql-9.3 on

The default data directory for this instance of PostgreSQL will be "/var/lib/pgsql/9.3/data". Note that this path is versioned - this prevents the installation of a downlevel or uplevel PostgreSQL package from destroying your database if you do so accidentally or forget to follow the appropriate version migration procedures. Most documentation will assume a data directory like "/var/lib/postgresql" [notably unversioned]; simply keep in mind that you always need to contextualize the paths used in documentation to your site's packaging and provisioning.

Enable EPEL Repository

The EPEL repository provides a variety of the dependencies of the PostGIS packages provided by the PGDG repository.

curl -O
rpm -Uvh epel-release-6-8.noarch.rpm

Installing PostGIS

The PGDG package for PostGIS should now install without errors.

yum install postgis2_93

If you do not have EPEL successfully enabled when you attempt to install the PGDG PostGIS packages you will see dependency errors.

---> Package postgis2_93-client.x86_64 0:2.1.1-1.rhel6 will be installed
--> Processing Dependency: for package: postgis2_93-client-2.1.1-1.rhel6.x86_64
--> Finished Dependency Resolution
Error: Package: gdal-libs-1.9.2-4.el6.x86_64 (pgdg93)
Error: Package: gdal-libs-1.9.2-4.el6.x86_64 (pgdg93)
Error: Package: gdal-libs-1.9.2-4.el6.x86_64 (pgdg93)

Initializing PostGIS

Many PostGIS applications expect the template database "template_postgis" to exist, but this database is not created automatically.

su - postgres
createdb -E UTF8 -T template0 template_postgis
-- ... See the following note about enabling plpgsql ...
psql template_postgis
psql -d template_postgis -f /usr/pgsql-9.3/share/contrib/postgis-2.1/postgis.sql
psql -d template_postgis -f /usr/pgsql-9.3/share/contrib/postgis-2.1/spatial_ref_sys.sql 

Using the PGDG packages, the PostgreSQL plpgsql embedded language, frequently used to develop stored procedures, is enabled in the template0 database from which the template_postgis database is derived. If you are using other PostgreSQL packages, or have built PostgreSQL from source [are you crazy?], you will need to ensure that this language is enabled in your template_postgis database before importing the schema - to do so run the following command immediately after the "createdb" command. If you see an error stating the language is already enabled you are good to go; otherwise you should see a message stating the language was enabled. If creating the language fails for any reason other than it already being enabled, you must resolve that issue before proceeding to install your GIS applications.

$ createlang -d template_postgis plpgsql
createlang: language "plpgsql" is already installed in database "template_postgis"


PostGIS is now enabled in your PostgreSQL instance and you can use and/or develop exciting new GIS & geographic applications.

by whitemice at February 09, 2017 11:43 AM

February 03, 2017

Whitemice Consulting

Unknown Protocol Drops

I've seen this one a few times and it is always momentarily confusing: on an interface on a Cisco router there is a rather high number of "unknown protocol drops". What protocol could that be?! Is it some type of hack attempt? Ambitious, if they are shaping their own raw packets onto the wire. But, no, the explanation is much less exciting, and typical: the lazy ape kind of error.

  5 minute input rate 2,586,000 bits/sec, 652 packets/sec
  5 minute output rate 2,079,000 bits/sec, 691 packets/sec
     366,895,050 packets input, 3,977,644,910 bytes
     Received 15,91,926 broadcasts (11,358 IP multicasts)
     0 runts, 0 giants, 0 throttles
     0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
     0 watchdog
     0 input packets with dribble condition detected
     401,139,438 packets output, 2,385,281,473 bytes, 0 underruns
     0 output errors, 0 collisions, 3 interface resets
     97,481 unknown protocol drops  <<<<<<<<<<<<<<
     0 babbles, 0 late collision, 0 deferred

This is probably the result of CDP (Cisco Discovery Protocol) being enabled on one interface on the network and disabled on this interface. CDP is the unknown protocol. CDP is a proprietary Data Link layer protocol that, if enabled, sends an announcement out the interface every 60 seconds. If the receiving end gets the CDP packet and has "no cdp enable" in the interface configuration, those announcements count as "unknown protocol drops". The solution is to make the CDP settings, enabled or disabled, consistent on every device in the interface's scope.

by whitemice at February 03, 2017 06:32 PM

Screen Capture & Recording in GNOME3

GNOME3, aka GNOME Shell, provides a comprehensive set of hot-keys for capturing images from your screen as well as recording your desktop session. These tools are priceless for producing documentation and reporting bugs; recording your interaction with an application is much easier than describing it.

  • Alt + Print Screen : Capture the current window to a file
  • Ctrl + Alt + Print Screen : Capture the current window to the cut/paste buffer
  • Shift + Print Screen : Capture a selected region of the screen to a file
  • Ctrl + Shift + Print Screen : Capture a selected region of the screen to the cut/paste buffer
  • Print Screen : Capture the entire screen to a file
  • Ctrl + Print Screen : Capture the entire screen to the cut/paste buffer
  • Ctrl + Alt + Shift + R : Toggle screencast recording on and off.

Recorded video is in WebM format (VP8 codec, 25fps). Videos are saved to the ~/Videos folder and image files are saved in PNG format into the ~/Pictures folder. When screencast recording is enabled there will be a red recording indicator in the bottom right of the screen; this indicator will disappear once screencast recording is toggled off again.

by whitemice at February 03, 2017 06:29 PM

Converting a QEMU Image to a VirtualBox VDI

I use VirtualBox for hosting virtual machines on my laptop and received a Windows 2008R2 server image from a consultant as a compressed QEMU image. So how to convert the QEMU image to a VirtualBox VDI image?

Step#1: Convert QEMU image to raw image.

Starting with the file WindowsServer1-compressed.img (size: 5,172,887,552)

Convert the QEMU image to a raw/dd image using the qemu-img utility.

qemu-img convert  WindowsServer1-compressed.img  -O raw  WindowsServer1.raw

I now have the file WindowsServer1.raw (size: 21,474,836,480)

Step#2: Convert the RAW image into a VDI image using the VBoxManage tool.

VBoxManage convertfromraw WindowsServer1.raw --format vdi  WindowsServer1.vdi
Converting from raw image file="WindowsServer1.raw" to file="WindowsServer1.vdi"...
Creating dynamic image with size 21474836480 bytes (20480MB)...

This takes a few minutes, but finally I have the file WindowsServer1.vdi (size: 14,591,983,616)

Step#3: Compact the image

Smaller images are better! It is likely the image is already compact; however, this also doubles as an integrity check.

VBoxManage modifyhd WindowsServer1.vdi --compact

Sure enough the file is the same size as when we started (size: 14,591,983,616). Upside is the compact operation went through the entire image without any errors.

Step#4: Cleanup and make a working copy.

Now MAKE A COPY of that converted file and use that for testing. Set the original as immutable [chattr +i] to prevent it being used by accident. I do not want to waste time converting the original image again.

Throw away the intermediate raw image and compress the image we started with for archive purposes.

rm WindowsServer1.raw 
cp WindowsServer1.vdi WindowsServer1.SCRATCH.vdi 
sudo chattr +i WindowsServer1.vdi
bzip2 -9 WindowsServer1-compressed.img 

The files at the end:

File                               Size
WindowsServer1-compressed.img.bz2  5,102,043,940
WindowsServer1.SCRATCH.vdi         14,591,983,616
WindowsServer1.vdi                 14,591,983,616


Generate a new UUID for the scratch image. This is necessary anytime a disk image is duplicated. Otherwise you risk errors like "Cannot register the hard disk '/archive/WindowsServer1.SCRATCH.vdi' {6ac7b91f-51b6-4e61-aa25-8815703fb4d7} because a hard disk '/archive/WindowsServer1.vdi' with UUID {6ac7b91f-51b6-4e61-aa25-8815703fb4d7} already exists" as you move images around.

VBoxManage internalcommands sethduuid WindowsServer1.SCRATCH.vdi
UUID changed to: ab9aa5e0-45e9-43eb-b235-218b6341aca9

Generating a unique UUID guarantees that VirtualBox is aware that these are distinct disk images.
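The conflict VirtualBox is avoiding can be illustrated with a toy registry that refuses a UUID already claimed by another image (this is only an illustration of the idea, not VirtualBox's actual code; the names are made up):

```python
class DuplicateImageError(Exception):
    pass

def register(registry, path, disk_uuid):
    """Register a disk image, refusing a UUID already claimed by another path."""
    if registry.get(disk_uuid, path) != path:
        raise DuplicateImageError(
            'cannot register {0}: UUID {1} already in use'.format(path, disk_uuid))
    registry[disk_uuid] = path

media = {}
register(media, 'WindowsServer1.vdi', '6ac7b91f-51b6-4e61-aa25-8815703fb4d7')
register(media, 'WindowsServer1.SCRATCH.vdi', 'ab9aa5e0-45e9-43eb-b235-218b6341aca9')
# Registering the scratch copy under the first UUID would raise DuplicateImageError.
```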

Versions: VirtualBox 5.1.12, QEMU Tools 2.6.2. On openSUSE LEAP 42.2 the qemu-img utility is provided by the qemu-img package.

by whitemice at February 03, 2017 02:36 PM

January 24, 2017

Whitemice Consulting

XFS, inodes, & imaxpct

Attempting to create a file on a large XFS filesystem - and it fails with an exception indicating insufficient space! There are available blocks - df says so. Huh? While, unlike traditional UNIX filesystems, XFS doesn't suffer from the boring old issue of "inode exhaustion", it does have inode limits - based on a percentage of the filesystem size.

linux-yu4c:~ # xfs_info /mnt
meta-data=/dev/sdb1              isize=256    agcount=4, agsize=15262188 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=61048752, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal               bsize=4096   blocks=29808, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=0
realtime =none                   extsz=4096   blocks=0, rtextents=0

The key is that "imaxpct" value. In this example inodes are limited to 25% of the filesystem's capacity. That is a lot of inodes! But some tools and distributions may default that percentage to a much lower value - like 5% or 10% (for what reason I don't know). This value can be set at filesystem creation time using the "-i maxpct=nn" option or adjusted later using the xfs_growfs command's "-m nn" option. So if you have an XFS filesystem with available capacity that is telling you it is full: check your "imaxpct" value, then grow the inode percentage limit.
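As a back-of-the-envelope check of what that 25% buys you here, using the numbers from the xfs_info output above (the helper function is just for illustration):

```python
def max_inodes(blocks, bsize, isize, imaxpct):
    """Rough ceiling on inode count: imaxpct percent of the filesystem's
    bytes, divided by the size of a single inode."""
    return (blocks * bsize * imaxpct // 100) // isize

# blocks=61048752, bsize=4096, isize=256, imaxpct=25, per xfs_info:
print(max_inodes(61048752, 4096, 256, 25))  # -> 244195008
```

Roughly 244 million inodes on this modest filesystem; at imaxpct=5 the ceiling drops to a fifth of that.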

by whitemice at January 24, 2017 07:59 PM

Changing FAT Labels

I use a lot of SD cards and USB thumb-drives; when plugged in, these devices automount in /media as either the file-system label (if set) or some arbitrary thing like /media/disk46. So how can one modify or set the label on an existing FAT filesystem? Easy as:

mlabel -i /dev/mmcblk0p1 -s ::WMMI06  
Volume has no label 
mlabel -i /dev/mmcblk0p1  ::WMMI06
mlabel -i /dev/mmcblk0p1 -s :: 
Volume label is WMMI06

mlabel -i /dev/sdb1 -s ::
Volume label is Cruzer
mlabel -i /dev/sdb1  ::DataCruzer
mlabel -i /dev/sdb1 -s ::
Volume label is DataCruzer (abbr=DATACRUZER )

mlabel is provided by the mtools package. Since we don't have a drive letter the "::" is used to refer to the actual device specified using the "-i" directive. The "-s" directive means show; otherwise the command attempts to set the label to the value immediately following (no whitespace!) the drive designation [default behavior is to set, not show].

by whitemice at January 24, 2017 07:51 PM

Deduplicating with group_by, func.min, and having

You have a text file with four million records and you want to load this data into a table in an SQLite database. But some of these records are duplicates (based on certain fields) and the file is not ordered. Due to the size of the data, loading the entire file into memory doesn't work very well. And due to the number of records, doing a check-at-insert when loading the data is also prohibitively slow. But what does work pretty well is to load all the data and then deduplicate it. Having an auto-increment record id is what makes this possible.

class VendorCross(scratch_base):
    __tablename__ = 'sku'
    id      = Column(Integer, primary_key=True, autoincrement=True)
    # ... the source, sku, oem, and part columns are not shown ...

Once all the data gets loaded into the table the deduplication is straight-forward using minimum and group by.

query = scratch.query(
    func.min(VendorCross.id),
    VendorCross.sku,
    VendorCross.oem,
    VendorCross.part ).filter(
        VendorCross.source == source ).group_by(
        VendorCross.sku,
        VendorCross.oem,
        VendorCross.part ).having(
            func.count(VendorCross.id) > 1 )
counter = 0
for (id, sku, oem, part, ) in query.all( ):
    counter += 1
    scratch.query(VendorCross).filter(
        and_(
            VendorCross.source == source,
            VendorCross.sku == sku,
            VendorCross.oem == oem,
            VendorCross.part == part,
            VendorCross.id != id ) ).delete( )
    if not (counter % 1000):
        # Commit every 1,000 records, SQLite does not like big transactions
        scratch.commit( )
scratch.commit( )

This incantation removes all the records from each group except the one with the lowest id. The trick for good performance is to batch many deletes into each transaction - only commit every so many [in this case 1,000] groups processed; just remember to also commit at the end to catch the deletes from the last iteration.
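The same keep-the-minimum-id idea can be expressed directly in SQL. A self-contained sketch against an in-memory SQLite database (the table and column names here are made up for the example):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE sku (id INTEGER PRIMARY KEY AUTOINCREMENT, part TEXT)')
conn.executemany('INSERT INTO sku (part) VALUES (?)',
                 [('A',), ('B',), ('A',), ('A',), ('B',)])
# Keep the lowest id in each duplicate group; delete everything else.
conn.execute("""
    DELETE FROM sku
    WHERE id NOT IN (SELECT MIN(id) FROM sku GROUP BY part)
""")
conn.commit()
print(conn.execute('SELECT id, part FROM sku ORDER BY id').fetchall())
# -> [(1, 'A'), (2, 'B')]
```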

by whitemice at January 24, 2017 07:45 PM

AIX Printer Migration

There are few things in IT more utterly and completely baffling than the AIX printer subsystem.  While powerful it accomplishes its task with more arcane syntax and scattered settings files than anything else I have encountered. So the day inevitably comes when you face the daunting task of copying/recreating several hundred print queues from some tired old RS/6000 we'll refer to as OLDHOST to a shiny new pSeries known here as NEWHOST.  [Did you know the bar Stellas in downtown Grand Rapids has more than 200 varieties of whiskey on their menu?  If you've dealt with AIX's printing subsystem you will understand the relevance.] To add to this Sisyphean task the configuration of those printers have been tweaked, twiddled and massaged individually for years - so that rules out the wonderful possibility of saying to some IT minion "make all these printers, set all the settings exactly the same" [thus convincing the poor sod to seek alternate employment, possibly as a bar-tender at the aforementioned Stellas].

Aside: Does IBM really, truly not provide a migration technique? No, they do not. Seriously.

But I now present to you the following incantation [to use at your own risk]:

scp root@OLDHOST:/etc/qconfig /etc/qconfig
stopsrc -cg spooler
startsrc -g spooler
rsync --recursive --owner --group --perms \
  root@OLDHOST:/var/spool/lpd/pio/@local/custom/ \
  /var/spool/lpd/pio/@local/custom/
rsync --recursive --owner --group --perms  \
  root@OLDHOST:/var/spool/lpd/pio/@local/dev/ \
  /var/spool/lpd/pio/@local/dev/
rsync --recursive --owner --group --perms  \
  root@OLDHOST:/var/spool/lpd/pio/@local/ddi/ \
  /var/spool/lpd/pio/@local/ddi/
chmod 664 /var/spool/lpd/pio/@local/ddi/*
chmod 664 /var/spool/lpd/pio/@local/custom/*
enq -d
cd  /var/spool/lpd/pio/@local/custom
for FILE in `ls`
do
   /usr/lib/lpd/pio/etc/piodigest $FILE
done
chown root:printq /var/spool/lpd/pio/@local/custom/*
chown root:printq /var/spool/lpd/pio/@local/ddi/*
chmod 664 /var/spool/lpd/pio/@local/ddi/*
chmod 664 /var/spool/lpd/pio/@local/custom/*

Execute this sequence on NEWHOST and the print queues and their configurations will be "migrated". 

NOTE#1: This depends on all those print queues being network attached printers.  If the system has direct attached printers that correspond to devices such as concentrators, lion boxes, serial ports, SCSI buses,.... then please do not do this, you are on your own.  Do not call me, we never talked about this.

NOTE#2: This will work once.  If you've then made changes to printer configuration or added/removed printers do not do it again.  If you want to do it again first delete ALL the printers on NEWHOST.  Then reboot, just to be safe.  At least stop and start the spooler service after deleting ALL the printer queues.

NOTE#3: I do not endorse, warranty, or stand behind this method of printer queue migration.  It is probably a bad idea.  But the entire printing subsystem in AIX is a bad idea, sooo.... If this does not work do not call me; we never talked about this.

by whitemice at January 24, 2017 11:46 AM

The source files could not be found.

I have several Windows 2012 VMs in a cloud environment and discovered I am unable to install certain roles / features. Attempting to do so fails with a "The source files could not be found." error. This somewhat misleading message indicates Windows is looking for the OS install media. Most of the solutions on the Interwebz for working around this error describe how to set the server with an alternate path to the install media ... problem being that these VMs were created from a pre-activated OVF image and there is no install media available from the cloud's library.

Lacking install media the best solution is to set the server to skip the install media and grab the files from Windows Update.

  1. Run "gpedit.msc"
  2. "Local Computer Policy"
  3. "Administrative Templates"
  4. "System"
  5. Enable "Specify settings for optional component installation and component repair"
  6. Check the "Contact Windows Update directly to download repair content instead of Windows Server Update Services (WSUS)"

Due to technical limitations WSUS cannot be utilized for this purpose, which is sad given that there is a WSUS server sitting in the same cloud. :(

by whitemice at January 24, 2017 11:31 AM

January 05, 2017

As it were ...

A Grand Experiment

Well, it’s time for a new job. “What?!?!” you ask. “Didn’t you just get a new job a few months ago?”

Indeed I did. This last August I ended my time with Pippin and moved to Modern Tribe. For a variety of reasons it didn’t work out. No-one’s upset, I still love and respect them, they still like me, it just wasn’t what either of us expected.

So, on to the future.

The plan at this point is to merge my experience as a freelancer with Tanner Moushey’s company and his experience as a freelancer and form a new WordPress agency. We’re doing a short trial period first, just to make sure this is really what we want, but by summer we should have a new company brand etc.

The General Plan

Our goal is freedom, both for ourselves and the people who work for us. This means not being married to the job, or making the job super complicated. We’d like to stay small and flexible, and do relatively small projects. We’re not looking to be a VIP agency or anything.

How You Can Help

If you need any web dev help, let me know. Tell your friends etc. I’m back to taking contracts. This time we’re a team though, which makes for a lot more depth, stability, and security.

This feels so so good, the best I’ve felt about a job since the first time I went 100% freelance.

Thank you for your support.

by topher at January 05, 2017 05:22 PM

November 02, 2016

As it were ...

Building a custom Google Map

For about a year now I’ve had a Google map on showing pins of where my contributors are from. I’ve been using Maps Builder Pro from WordImpress. It’s an excellent plugin, and does many of the things I wanted, but not all of them. Here’s what I was after:

My contributors are a custom content type in WordPress, not just authors. Maps Builder Pro provides a search box in the admin of each contributor to search for a location on Google Maps. Then I simply click the location and it fills in a bunch of meta boxes with data like coordinates, city name, and some unique location data.

I wanted a plugin that would automatically go get all that data, organize people by location, grouping people who are from the same location, and put in one pin per location, with the bubbles showing all the people from that location.

The map I made with Maps Builder Pro let me do most of this, but manually.  I had to keep the map up to date each week, and I was terrible at that.

So I wanted a new plugin, but I dearly love the admin UI for gathering and storing data that Maps Builder Pro provides. So that plugin remains, and I’ll use it that way. I built a new plugin for rendering the map with my requirements.

What I learned

I started with a tutorial by a guy named Ian Wright. It’s excellent, as are all of his maps tutorials. I highly recommend them.

Data Organization

The pins and the contents of the pins are two different data sets in Javascript, and they’re related by order. So pin 1 pairs with content block 1, and pin 42 goes with content block 42.  This means you need to have a content block for every pin, even if it’s empty, so that the 42’s match up properly.


Bounding

Ian’s tutorial uses bounding to set the zoom and center for the map. I didn’t understand that, so when I tried to change it, I failed terribly. Here’s what that all means.

When creating a pin we put in

bounds.extend(marker.getPosition());

which tells the map object the bounds of the pins on a map. Then we put in

map.fitBounds(bounds);

which tells the map to zoom just the right amount so you can see all the pins, and center on the middle of them. This made it so that when I later tried to make a different center with setCenter() it didn’t work.

Additionally, when I removed the fitBounds() function the whole map broke. This is because you MUST use some sort of centering code, and I had neither fitBounds() nor setCenter().

The key was to have a setCenter() and NOT have a fitBounds(). Then I was able to easily have a setZoom as well.

Static Maps

I just found out that you can have the maps API return an image rather than an interactive map.  So you can programmatically make the map, but it loads as fast as an image.  If you don’t need interactivity then it’s a MUCH better way to go.  I’m thinking of putting a small map on each contributor’s page with a single pin, showing where they’re from. It would then link to a google map.

In Summary

I’ve heard a fair number of people whine about how terrible the Google Maps API is, but I really like it.  I don’t know Javascript, and I was able to easily adapt some tutorial code, read the docs to extend it, and make something really slick. I really recommend it.

by topher at November 02, 2016 02:44 AM

October 16, 2016

As it were ...

The Right Stuff

Recently a friend started working on a WordPress plugin. The plugin was scratching an itch, counting the words in a collection of posts and rendering the count in a widget, as an incentive to post regularly. In the process of building the plugin she tweeted quite a bit, about successes, struggles, and frustrations. At one point I sent her some encouragement:

She was right, I hadn’t seen her code.  I’d never seen any of her code. At that point I didn’t know if she could code at all. But I knew she was doing awesome. How?

I could tell from her tweets that she was struggling with things, doing research, overcoming those things, and moving on. Anyone who can complete that process is essentially unstoppable as a developer. That process also works in any other walk of life.

Do you have what it takes to be a WordPress developer? Or any kind of developer? Or anything else in life? If you can confront your struggles head on, find a solution, and move on, you will be unstoppable.

by topher at October 16, 2016 10:56 PM

October 03, 2016

Whitemice Consulting

Playing With Drive Images

I purchased a copy of Windows 10 on a USB thumbdrive. I chose physical media in order to have (a) a backup and (b) no need to bother with downloading a massive image. Primarily this copy of Windows will be used in VirtualBox for testing, using PowerShell, and other tedious system administrivia. The first thing I did when it arrived was use dd to make a full image of the thumbdrive so I could tuck it away in a safe place.

dd if=/dev/sde of=Windows10.Thumbdrive.20160918.dd bs=512

But now the trick is to take that raw image and convert it to a VMDK so that it can be attached to a virtual machine. The VBoxManage command provides this functionality:

VBoxManage internalcommands createrawvmdk -filename Windows10.vmdk -rawdisk Windows10.Thumbdrive.20160918.dd

Now I have a VMDK file. If you do this you will notice the VMDK file is small - it is essentially a pointer to the disk image; the purpose of the VMDK is to provide the meta-data necessary to make the hypervisor (in this case VirtualBox) happy. Upshot of that is that you cannot delete the dd image as it is part of your VMDK.

Note that this dd file is a complete disk image, including the partition table:

awilliam@beast01:/vms/ISOs> /usr/sbin/fdisk -l Windows10.Thumbdrive.20160918.dd
Disk Windows10.Thumbdrive.20160918.dd: 14.4 GiB, 15502147584 bytes, 30277632 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x00000000
Device                            Boot Start      End  Sectors  Size Id Type
Windows10.Thumbdrive.20160918.dd1 *     2048 30277631 30275584 14.4G  c W95 FAT32

So if I wanted to mount that partition on the host operating system I can do that by calculating the offset and mounting through loopback. The offset to the start of the partition within the drive image is the start sector multiplied by the sector size: 512 * 2,048 = 1,048,576. The mount command provides support for offset mounting:

beast01:/vms/ISOs $ sudo mount -o loop,ro,offset=1048576 Windows10.Thumbdrive.20160918.dd /mnt
beast01:/vms/ISOs # ls /mnt
83561421-11f5-4e09-8a59-933aks71366.ini  boot     bootmgr.efi  setup.exe                  x64
autorun.inf                              bootmgr  efi          System Volume Information  x86
beast01:/vms/ISOs $ sudo umount /mnt
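The offset arithmetic can also be scripted rather than worked out by hand; a minimal sketch using the sector values from the fdisk output above:

```python
# Byte offset of the first partition inside the raw dd image, as passed to
# mount's offset= option: start sector multiplied by the sector size.
SECTOR_SIZE = 512    # "Units: sectors of 1 * 512 = 512 bytes"
START_SECTOR = 2048  # "Start" column of the fdisk listing

offset = SECTOR_SIZE * START_SECTOR
print(offset)  # 1048576
```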

If all I wanted was the partition, and not the drive, the same offset logic could be used to lift the partition out of the image into a distinct file:

dd if=Windows10.Thumbdrive.20160918.dd of=Windows10.image bs=512 skip=2048

The "Windows10.image" file could be mounted via loopback without bothering with an offset. It might, however, be more difficult to get a virtual host to boot from a FAT partition that does not have a partition table.

by whitemice at October 03, 2016 10:43 AM

September 15, 2016

Whitemice Consulting


Determine the DATE of the first day of the current week.


Informix always treats Sunday as day 0 of the week. The WEEKDAY function returns the number of the day of the week as a value of 0 - 6, so subtracting the weekday from the current day (TODAY) returns the DATE value of Sunday of the current week.
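The same logic is easy to mirror outside the database; a small Python sketch of the TODAY - WEEKDAY(TODAY) expression (note that Python's weekday() counts Monday as 0, so it must be shifted to match Informix's Sunday-as-0 convention):

```python
from datetime import date, timedelta

def informix_weekday(d):
    """Day of the week as 0 - 6 with Sunday as day 0, like Informix WEEKDAY."""
    return (d.weekday() + 1) % 7

def week_start(d):
    """Equivalent of the Informix expression TODAY - WEEKDAY(TODAY)."""
    return d - timedelta(days=informix_weekday(d))

print(week_start(date(2016, 9, 15)))  # 2016-09-11, the Sunday of that week
```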

Determining HOURS between two DATETIME values.

It is all about the INTERVAL data type and its rather odd syntax.

SELECT mpr.person_id, mpr.cn_name, 
  ((SUM(out_time - in_time))::INTERVAL HOUR(9) TO HOUR) AS hours
FROM service_time_card stc
  INNER JOIN morrisonpersonr mpr ON (mpr.person_id = stc.technician_id)
WHERE mpr.person_id IN (SELECT person_id FROM branch_membership WHERE branch_code = 'TSC')
  AND in_time > (SELECT TODAY - (WEEKDAY(TODAY)) UNITS DAY FROM systables WHERE tabid=1)  

The "(9)" part of the expression INTERVAL HOUR(9) TO HOUR is key - it allocates lots of room for hours; otherwise any value of more than a trivial number of hours will cause the clearly correct but not helpful SQL -1265 error: "Overflow occurred on a datetime or interval operation". As I had a highest value of 6,483 hours in my case, I needed at least HOUR(4) TO HOUR to avoid the overflow error. HOUR(9) is the maximum - an expression of HOUR(10) results in the unhelpful generic SQL -201 error: "A syntax error has occurred.". On the other hand, HOUR(9) is 114,155 years and some change, so it is doubtful that will be a problem in most applications.
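The capacity figure is easy to verify: nine digits of hours, as permitted by HOUR(9), works out to roughly 114,155 years when using 365-day years:

```python
# Largest hour count representable in INTERVAL HOUR(9) TO HOUR: nine digits.
max_hours = 10 ** 9 - 1
years = max_hours / 24 / 365  # hours -> days -> (365-day) years
print(int(years))  # 114155
```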

by whitemice at September 15, 2016 07:46 PM

August 08, 2016

As it were ...

July 28, 2016

As it were ...

A new job at Modern Tribe

I’m happy to announce that today is my last day at Sandhills Development (working on Easy Digital Downloads with Pippin), and that Monday will be my first day at Modern Tribe. In this post I hope to answer a few common questions.  🙂

Why? Didn’t you just get a new job?

I’ve been with Pippin for just over a year. I joined his team to be doing things other than development, things like documentation, community involvement etc. At the time I was coming off the high of promoting HeroPress and having a wonderful time Not Developing.

As it turns out, developing is what really excites me in my career. I simply never fell in love with writing docs the way I thought I would.

I’d like to be clear that Pippin is a wonderful boss and his company is a spectacular place to work. We’re parting on very good terms.

What will you be doing at Modern Tribe?

I applied for the job of Lead Developer. I’m not going to drop into that position immediately, that would be foolish until I know the culture and processes better. I’ll be doing whatever they tell me to do. 🙂 Developer is my primary purpose for being there though.

Still going to do HeroPress?

Yep, I do that in my spare time, and Modern Tribe doesn’t have a problem with that. In fact, many of the folks there are big fans of HeroPress.

Additionally, some exciting things are happening around the idea of expanding HeroPress a bit, more on that later.


by topher at July 28, 2016 11:44 AM

July 23, 2016

As it were ...

Dragons fly



Caught this little guy on the edge of the grill the other night.

by topher at July 23, 2016 09:12 PM

July 17, 2016

As it were ...

My Birth Day

In the months before my dad died we went through a lot of Stuff. Some of it was his, some was my mom’s, some from his parents and in-laws.

One of the boxes he showed me held a bunch of diaries from my maternal Grandmother. I never knew they existed, so I started looking through them until I came to 1971. I slowly flipped through until I came to July 17.  Here’s what I found:

It’s a treasure for me to be able to see her handwriting again, to read what she had to say to us, about me.

by topher at July 17, 2016 04:00 AM

July 03, 2016

As it were ...

Honey bee on Lavender


The honey bees are really loving our Lavender this year.


I managed to catch this one on mid flight.

by topher at July 03, 2016 01:54 PM

June 21, 2016

As it were ...

WordCamp Europe

A few months ago the owner of HeroPress sent me an email and said “I think you should go to WordCamp Europe for HeroPress, can I cover that for you?” I thought long and hard for about 4 seconds before saying yes!  We decided to throw in some extra so that my wife could go along, she’s as much as part of HeroPress as I am.

So now we’re off to Vienna, Austria for WordCamp! We’re super excited.  We’re flying Austrian Airlines, which doesn’t fly out of Grand Rapids, so today we drove to Chicago. Just as we were getting to the hotel in Chicago the battery light came on in the Jeep, and it started sounding odd.  When we got there and I opened the hood the alternator was smoking.

I called AAA, and for $40 I was able to get us to a cooler insurance plan that covers towing the Jeep all the way back to Grand Rapids at no extra cost. Now we’re looking at taking the train back to Grand Rapids once we get home from Austria.

Our trip will quite literally involve planes, trains, and automobiles.

by topher at June 21, 2016 12:58 AM

A new kind of post

I’m sure both of my regular readers have noticed a recent flurry of posts that are simply photos, and mostly flowers at that.  I’ve always enjoyed taking macro shots of flowers, and I’ve always wanted to post them easily to my blog rather than to some service. I finally spent the time to figure out the workflow for the WordPress android app, and now you’re getting a lot more photos.

I wish I felt the urge to blog more, there are lots of things I’d love to have logged here, but my heart just isn’t in it.  So except for some posts here in the next few days it’ll probably just be lots of photos for a while.

I hope you enjoy it.

by topher at June 21, 2016 12:43 AM

June 18, 2016

As it were ...

Wild Roses


Seen in the wilds of the Target parking lot.

by topher at June 18, 2016 07:38 PM

June 16, 2016

As it were ...

Fairy umbrella


Saw several of these in the flower bed this morning after a rain.

by topher at June 16, 2016 02:57 PM

June 11, 2016

As it were ...

Purple spikes


There are tiny spider webs on this one too.

by topher at June 11, 2016 06:59 PM

June 10, 2016

As it were ...

June 06, 2016

As it were ...

Purple haze

White flower with three petals, purple fuzz in the center, and 4 little stalks in the center.

I love the purple fuzz in the middle of this flower.

by topher at June 06, 2016 05:56 PM

Purple Flower

I need to do a better job getting the names of flowers.


by topher at June 06, 2016 02:45 PM

March 31, 2016

As it were ...

A day at Meijer Gardens

The other day my sister came to visit and brought her two daughters and our niece. We got together with my friend John and his wife and kids and went to Meijer Gardens. A good time was had by all, here are some pictures.

[Photo gallery: Common Butterwort, a North American Pitcher Plant, unknown red flowers, a Monarch Butterfly chrysalis, a giant moth bigger than my niece's hand, butterflies up close, tulips, statues including Da Vinci's Horse and Mad Mom, a Chihuly sculpture, and the Japanese garden]

Also some video.

by topher at March 31, 2016 05:20 PM

March 27, 2016

As it were ...

Louis L’Amour

When I was in high school a friend suggested I try reading a Louis L’Amour book, and I didn’t enjoy it. A small group of people were struggling across the desert and then they reached the ocean and it was done. Bleh.

About a year later I was camping with a friend and he brought The Last of the Breed by L’Amour, and I read the first few pages one afternoon. That night after he hit the sack I read the next 200 pages. He let me borrow it and my whole family read it, and we read it to pieces. We bought him a nice hardcover copy in thanks. That’s when I fell in love with Louis L’Amour books.

Over the next 3 years or so I read everything he ever wrote. He had a contract to write 3 books per year, which is kind of crazy. Most of them were about 120-150 pages each, mostly about the American West in the 1800’s. This is what gave him a reputation as a western novelist. He had well over 100 novels released in his lifetime.

While writing all those short novels though, he was working on his long form novels, and those were my favorite. Only a couple were about the American West. The Last of the Breed is set in the 1980’s in Siberia. The Walking Drum is set in the 1100’s over most of Europe. Jubal Sackett is set in America in the early 1600’s and ranges from the East Coast to the other side of the Mississippi.

He also did an excellent series about the Sackett family. It starts in England in 1599 and over the next couple books moves to the East Coast of America, and even dips down into the Caribbean a bit. It swings through the 1700’s once, and then follows several brothers across the American West in the 1800’s. Most of these are pretty short, but a few are longer, and Jubal Sackett is very long.

Someone asked me for some favorites recently, and I’d like to say that I like them all because he’s an excellent writer. I’ve read other Western Novelists and found that I don’t really prefer the genre. L’Amour is just excellent. But here’s a list.

The Walking Drum is by far my favorite.

The Last of the Breed by far my second favorite, I did a review once long ago.

The Lonesome Gods is set in the American West, but it’s long form, and has tons more of the history and politics of early California in it. Did you know San Francisco was called Yerba Buena first? Good Herb sounds like a great name for a city in California.

I love the entire Sackett series, it’s just great.

West From Singapore, Yondering, and Beyond the Great Snow Mountains are all collections of short stories set in the early 20th century, in the South Pacific and both Eastern and Western Asia. Many of these were inspired by L’Amour’s time in those locations at that time. There’s a strong feeling of Indiana Jones in here, and he wrote them decades before Indy came on the scene.

Sitka is set in Alaska, so it has that frontier feel, but it’s a different location.

The Ferguson Rifle is about the first quick-load rifle, and the impact it had on the West.

I’ll let it go at that, but keep in mind that just about everything he wrote is great. Wikipedia has a nice book list, look at the series, those are always a little better because he plans well.

by topher at March 27, 2016 08:13 PM

March 07, 2016

OpenGroupware (Legacy and Coils)

Task Retention / Auto-Archiving

The expectation is that users creating tasks will archive those tasks when they are either completed or rejected; this is the completion of the task work-flow. However, it may be advantageous, at least for certain kinds of tasks, to ensure that tasks are archived at some point even if the owner chooses to ignore them [archiving a task removes it from the executant's task list]. To facilitate auto-archival the configuration document named TaskRetention.yaml exists in project 7,000. For administrators this document should be available via WebDAV in the /dav/Administration folder.

The task retention document is a YAML dictionary relating task kind values to data retention rules. The key of the dictionary is a case-sensitive task kind string; the string generic corresponds to all tasks having a NULL kind. The value for each key is a dictionary supporting the following key:

  • autoArchiveThreshold – The number of days, expressed as an integer, after which the action will automatically archive a task in a rejected or completed state.

Service_Laptop_Update: {'autoArchiveThreshold': 3, }
PQIRTS.ISSUE:CP_ERROR: {'autoArchiveThreshold': 14, }
PQIRTS.ISSUE:DEFECTIVE_PART: {'autoArchiveThreshold': 14, }
PQIRTS.ISSUE:BAD_CROSS: {'autoArchiveThreshold': 14, }
WEBSITE_ENH: {'autoArchiveThreshold': 60, }
PARTS.XREFR.ADD: {'autoArchiveThreshold': 14, }
generic: {'autoArchiveThreshold': 90, }

Text 1: Example TaskRetention.yaml document. In this document the generic kind establishes a 90 day rule for automatically archiving tasks with a NULL kind. Other key values relate to tasks of specific kinds.

The values defined in this configuration document are applied to the task database by the archiveOldTasksAction workflow action. If a workflow route declaring an archiveOldTasksAction is never performed the values defined in this document will have no effect. The expectation is that a route declaring this action will be created and scheduled to be performed at some regular interval. Sites generating many tasks are encouraged to perform the auto-archiving work-flow more frequently than sites generating few tasks.
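The retention check performed by such a route can be sketched as below. This is only an illustration of the rules described above, not OpenGroupware Coils code: the should_archive helper is hypothetical, and the inlined rule dictionary stands in for data that would really be loaded from TaskRetention.yaml.

```python
from datetime import date, timedelta

# Rules as they would appear after loading the TaskRetention.yaml document
# above; "generic" covers tasks with a NULL kind.
RETENTION_RULES = {
    'Service_Laptop_Update': {'autoArchiveThreshold': 3},
    'WEBSITE_ENH': {'autoArchiveThreshold': 60},
    'generic': {'autoArchiveThreshold': 90},
}

def should_archive(kind, completed_on, today, rules=RETENTION_RULES):
    """True if a completed/rejected task of this kind has exceeded its
    autoArchiveThreshold; a hypothetical sketch of the archiveOldTasksAction
    decision, not the actual implementation."""
    # NULL kind, or a kind with no specific rule, falls back to "generic"
    rule = rules.get(kind or 'generic')
    if rule is None:
        rule = rules.get('generic')
    if rule is None:
        return False  # no applicable rule: never auto-archive
    threshold = timedelta(days=rule['autoArchiveThreshold'])
    return (today - completed_on) > threshold

print(should_archive('Service_Laptop_Update', date(2016, 3, 1), date(2016, 3, 7)))  # True
print(should_archive(None, date(2016, 3, 1), date(2016, 3, 7)))                     # False
```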

Author: Adam Tauno Williams

by whitemice at March 07, 2016 02:18 PM

March 01, 2016

OpenGroupware (Legacy and Coils)

New Feature: PDF Scrubbrush

PDF is intended to provide an entirely portable mechanism for the exchange of non-trivial documents. In practice, however, documents created by the real-world variety of clients almost inevitably contain deviations from the PDF standard which create issues when the document is processed by other applications and platforms. To ensure maximum compatibility the scrub brush feature re-compiles documents on the server using the Poppler libraries.

[workflow tmp]# pdffonts prescrubbed-document.pdf 
name                                 type              encoding         emb sub uni object ID
------------------------------------ ----------------- ---------------- --- --- --- ---------
CCQPLO+Arial                         CID TrueType      Identity-H       yes yes yes     18  0
ArialMT                              TrueType          WinAnsi          no  no  no      26  0
Arial-BoldMT                         TrueType          WinAnsi          no  no  no      28  0
QVDGUR+MinionPro-Regular             CID Type 0C       Identity-H       yes yes yes     30  0
FZPPKI+MinionPro-Regular             CID Type 0C       Identity-H       yes yes yes     38  0
Helvetica-Bold                       Type 1            Custom           no  no  no      52  0
Helvetica                            Type 1            Custom           no  no  no      58  0
ZapfDingbats                         Type 1            ZapfDingbats     no  no  no     188  0

Text 1: The pdffonts report of a document which references non-standard fonts but does not contain the fonts. This document is unlikely to render correctly by clients on platforms other than that which created it.

The most common defect is that the PDF references non-standard fonts which are not embedded in the PDF document – such fonts will either not display when viewed by other clients or will be replaced, often unsuccessfully, based on the viewer's font substitution tables. PDF documents can be examined using the pdffonts tool provided by the Poppler project.
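Documents needing the scrub brush can be spotted by scanning the pdffonts report for fonts whose emb column reads "no". A small sketch: it relies on the report's five fixed trailing columns (emb sub uni object-ID generation), and the sample report is abbreviated from Text 1 above.

```python
def unembedded_fonts(report):
    """Parse `pdffonts` text output and return the names of fonts that are
    referenced but not embedded (emb column == "no")."""
    fonts = []
    for line in report.splitlines():
        tokens = line.split()
        # skip the header row, the dashed separator, and blank lines
        if len(tokens) < 6 or tokens[0] == 'name' or tokens[0].startswith('-'):
            continue
        # the last five columns are always: emb sub uni object-ID generation
        if tokens[-5] == 'no':
            fonts.append(tokens[0])
    return fonts

REPORT = """\
name                                 type              encoding         emb sub uni object ID
------------------------------------ ----------------- ---------------- --- --- --- ---------
CCQPLO+Arial                         CID TrueType      Identity-H       yes yes yes     18  0
ArialMT                              TrueType          WinAnsi          no  no  no      26  0
Helvetica                            Type 1            Custom           no  no  no      58  0
"""

print(unembedded_fonts(REPORT))  # ['ArialMT', 'Helvetica']
```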

[workflow tmp]# pdffonts scrubbed-document.pdf 
name                                 type              encoding         emb sub uni object ID
------------------------------------ ----------------- ---------------- --- --- --- ---------
KOFYDJ+LiberationSans                TrueType          WinAnsi          yes yes yes      5  0
RXEXJO+LiberationSans-Bold           TrueType          WinAnsi          yes yes yes      6  0
IOKNUX+Arial                         TrueType          WinAnsi          yes yes yes      7  0
AADISD+MinionPro-Regular             CID Type 0C       Identity-H       yes yes yes     10  0
AGFLBT+MinionPro-Regular             CID Type 0C       Identity-H       yes yes yes     11  0
THKLNC+NimbusSanL-Bold               Type 1            WinAnsi          yes yes yes     12  0
CIXCHU+NimbusSanL-Regu               Type 1            WinAnsi          yes yes yes     13  0
QAOVNF+Dingbats                      Type 1            Builtin          yes yes yes     14  0

Text 2: The same document as previously after being processed by the scrubbrush; fonts have been substituted based on Poppler's font substitution tables, and those fonts are now all embedded in the document. This document should render consistently regardless of client application or platform.

The scrub brush feature is available in the following workflow actions:

  • searchDocumentsToZIPFileAction
  • folderToZipFileAction

In the future the scrub brush feature will be made available in the messageToINBOXAction and documentToMessageAction workflow actions.

Author: Adam Tauno Williams

by whitemice at March 01, 2016 02:11 PM

January 27, 2016

As it were ...

Saturn Run

Saturn Run
John Sandford, Ctein
Putnam Juvenile
October 6, 2015

I’ve been in a sci-fi book club now for 5 years, and I don’t think I’ve ever reviewed a single book we’ve read.  I’ll try to rectify that, but we’ll see.  Here’s the start.

Saturn Run is sort of a first contact story. It’s set in 2066 and the years after, and someone notices an interstellar ship dock near Saturn and then leave. The US and China begin a space race to see who can get people out there first.

The book is heavy on hard science.  There’s even an essay at the end about the science, and which parts were totally bogus (only one part really). I liked that, it put me in mind of The Martian.

The majority of the book is about the travel from Earth to Saturn, and mostly from the US ship’s viewpoint. Near the end we get some good character development on the Chinese side, and some good interaction with what they find orbiting Saturn, which I am NOT going to tell you about.

The only thing I really didn’t like was that there were a few loose ends that weren’t REALLY tied up. The tying was sort of an offhanded thing and left as many questions as answers.  On the other hand, there was one loose end I never expected to be tied up that WAS, and I thought it was very classy.

On the whole I highly recommend this book, especially if you like hard science sci-fi.

by topher at January 27, 2016 03:46 PM

November 23, 2015

As it were ...

A Year of HeroPress

It was one year ago on 21 November that my boss emailed me and told me it was time to do something different.  “I want you to do something special for WordPress” he said.  I knew right then that life would never be the same, and I was right.

I didn’t know it for a couple months, but that’s where HeroPress was really started.  In the last year I’ve had some amazing experiences, some hard times, and felt wonderful support from my family, friends, and people I’d never met before.

I’ve been to India and met literally hundreds of new people, many of whom are now dear friends.

I’ve failed and I’ve succeeded.

I have a completely new job doing something I really enjoy, but has nothing to do with anything I was doing at the start of the year.

Now I’m winding up 2015 with a sense of peace and accomplishment.  I’m proud of HeroPress and happy with where it’s going.

A giant thank you to everyone who’s been involved: my family, Dave Rosen, everyone at Pressnomics last year, everyone who commented on WPTavern about HeroPress, and everyone who’s contributed an essay.  Relatively speaking, HeroPress is made up far more of all of you than it is of me.

Thank you.

by topher at November 23, 2015 09:12 PM

August 28, 2015

Ben Rousch's Cluster of Bleep

Kivy – Interactive Applications and Games in Python, 2nd Edition Review

I was recently asked by the author to review the second edition of “Kivy – Interactive Applications in Python” from Packt Publishing. I had difficulty recommending the first edition mostly due to the atrocious editing – or lack thereof – that it had suffered. It really reflected badly on Packt, and since it was the only Kivy book available, I did not want that same inattention to quality to reflect on Kivy. Packt gave me a free ebook copy of this book in exchange for agreeing to do this review.

At any rate, the second edition is much improved over the first. Although a couple of glaring issues remain, it looks like it has been visited by at least one native English-speaking editor. The Kivy content is good, and I can now recommend it for folks who know Python and want to get started with Kivy. The following is the review I posted to Amazon:

This second edition of “Kivy – Interactive Applications and Games in Python” is much improved from the first edition. The atrocious grammar throughout the first edition has mostly been fixed, although it’s still worse than what I expect from a professionally edited book. The new chapters showcase current Kivy features while reiterating how to build a basic Kivy app, and the book covers an impressive amount of material in its nearly 185 pages. I think this is due largely to the efficiency and power of coding in Python and Kivy, but also to the carefully-chosen projects the author selected for his readers to create. Despite several indentation issues in the example code and the many grammar issues typical of Packt’s books, I can now recommend this book for intermediate to experienced Python programmers who are looking to get started with Kivy.

Chapter one is a good, quick introduction to a minimal Kivy app, layouts, widgets, and their properties.

Chapter two is an excellent introduction and exploration of basic canvas features and usage. This is often a difficult concept for beginners to understand, and this chapter handles it well.

Chapter three covers events and binding of events, but is much denser and difficult to grok than chapter two. It will likely require multiple reads of the chapter to get a good understanding of the topic, but if you’re persistent, everything you need is there.

Chapter four contains a hodge-podge of Kivy user interface features. Screens and scatters are covered well, but gestures still feel like magic. I have yet to find a good in-depth explanation of gestures in Kivy, so this does not come as a surprise. Behaviors is a new feature in Kivy and a new section in this second edition of the book. Changing default styles is also covered in this chapter. The author does not talk about providing a custom atlas for styling, but presents an alternative method for theming involving Factories.

In chapter six the author does a good job of covering animations, and introduces sounds, the clock, and atlases. He brings these pieces together to build a version of Space Invaders, in about 500 lines of Python and KV. It ends up a bit code-dense, but the result is a fun game and a concise code base to play around with.

In chapter seven the author builds a TED video player including subtitles and an Android actionbar. There is perhaps too much attention paid to the VideoPlayer widget, but the resulting application is a useful base for creating other video applications.

by brousch at August 28, 2015 01:16 AM

August 22, 2015

OpenGroupware (Legacy and Coils)

Task Rules

The OpenGroupware Coils Logic layer provides a simple rule system to help drive tasks along a user work-flow and to facilitate consistency for tasks of a specific type. Use of rules can help simplify the logic of client applications and ensure consistent chains of events occur when tasks are modified by multiple applications.

Task rules are stored in the “Rules/Tasks” folder of project 7,000 as YAML documents. These files can be created, edited, and deleted via WebDAV at /dav/Administration/Rules/Tasks. The name of a rule-set is the kind string of the task with a .yaml extension. For example the rules defined in the YAML document presented as /dav/Administration/Rules/Tasks/myExample.yaml will be processed by any update to or action on a task with a kind of myExample.

Each rule is a dictionary of two keys: match and apply. The value of match is a list of criteria used to determine if the rule applies to the task; if the rule matches the task, the values specified in the apply list are applied to the task. A rule may optionally have the keys name, description, and action. If defined, the description value should be a human-readable explanation of the purpose and intent of the rule; this value may be multi-line text as supported by the YAML parser. The value of the name key is used when the application of the rule is recorded in the task's audit log and should be a simple single-line string; when a rule without a name is applied to a task the audit log will refer to the rule as unnamed.

The criteria of match are dictionaries of three keys: key, value, and expression. Each match criterion compares the attribute of the task specified by key with the provided value using the requested expression. The expression key is optional; if not provided the default expression is equality. If the value of key begins with the curly brace character [“{“] the key is assumed to indicate an object property in the form "{namespace}attribute"; as object property values are typed, the comparison is made against value as the persisted type. A comparison against an object property which does not exist is always false regardless of expression. All match criteria must evaluate to true for the rule to apply to the task; the server ceases to process criteria as soon as any criterion evaluates to false.

Expression    Description
EQUALS        Returns true if the value of the specified task attribute is equal to the value.
ISNULL        Returns true if the value of the specified task attribute is NULL.
ISNOTNULL     Returns true if the value of the specified task attribute is not NULL.
NOTEQUALS     Returns true if the value of the specified task attribute is not equal to the value.
IN            Returns true if the value of the specified task attribute is found in the enumeration provided as the value.
NOTIN         Returns true if the value of the specified task attribute is not found in the enumeration provided as the value.
MEMBEROF      The comparison value must be an integer value, assumed to be an object id. The expression returns true if the value identifies an object (team or account) which is a context held by the object identified by the task attribute. For example, if the attribute used from the task is executant and the comparison value is 10003 [the all-intranet team], this expression evaluates to true if the executant is a member of the all-intranet team.
NOTMEMBEROF   The logical opposite of MEMBEROF; evaluates to true if the context identified by the value is not held by the object identified by the task attribute.

If the expression value is anything other than one of the supported types an exception will be raised – this will cause the update to or action upon the task to fail. All rules specified in a rule-set must be syntactically valid.
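The semantics of the expression table can be sketched in Python. This is an illustration, not the server's implementation; in particular the held_contexts argument stands in for the server's resolution of which team/account contexts the attribute's object holds:

```python
def evaluate(expression, attribute, value, held_contexts=frozenset()):
    """Evaluate one match criterion per the expression table above.
    held_contexts: the set of object ids (teams/accounts) whose context the
    object identified by the task attribute holds - a stand-in for the
    server's context resolution."""
    if expression == 'EQUALS':
        return attribute == value
    if expression == 'NOTEQUALS':
        return attribute != value
    if expression == 'ISNULL':
        return attribute is None
    if expression == 'ISNOTNULL':
        return attribute is not None
    if expression == 'IN':
        return attribute in value
    if expression == 'NOTIN':
        return attribute not in value
    if expression == 'MEMBEROF':
        return value in held_contexts
    if expression == 'NOTMEMBEROF':
        return value not in held_contexts
    # unsupported expressions raise, failing the update or action on the task
    raise ValueError('unsupported expression: {0}'.format(expression))

print(evaluate('EQUALS', '00_created', '00_created'))              # True
print(evaluate('MEMBEROF', 10100, 10003, held_contexts={10003}))   # True
```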

The apply section of each rule which matches the task will be used to set the specified values on the task. If multiple rules apply, each will be processed in turn; there is no specified order in which rules are processed, so care should be taken to avoid potentially overlapping rules.

The value of apply must be a dictionary. The keys of the dictionary identify the attributes of the task to which the value will be copied, overwriting the values of those task attributes. A key which begins with the curly brace [“{“] character is assumed to name an object property with the form “{namespace}attribute”; the object property will persist both the type and value of the value assigned. For task attributes the value assigned must correspond with the type of the attribute [in the ORM].

- match:
  - {key: executor_id, expression: MEMBEROF, value: 11530}
  apply:
  - {key: keywords, value: it-task}
  - {key: "{}testProp", value: fred123}
  name: myExampleRule
  description: set keywords and object property if the executor is a member of team 11530

When a value is copied to a task attribute as the application of a rule a 10_commented audit entry is created on the task describing what attribute was changed and the name of the rule which was applied; if a rule has no name the change is recorded as from rule unnamed.

Note that task rules are applied only upon a task action – such as task creation, update, accept, comment, etc... Direct modification of an object property will not invoke the processing of task rules, nor will adding or removing object links, or uploading or deletion of attachments.

Task rule-sets can be verified using the toolbox protocol [see documentation of the toolbox protocol in WMOGAG].

- match:
  - {key: owner_id, expression: MEMBEROF, value: 955840}
  - {key: state, expression: EQUALS, value: 00_created}
  apply:
  - {key: "{}autoArchived", value: 'YES'}
  name: archiveMVPskuAdd
  description: Automatically archive SKU adds from MVP employees
  action: archive

A task rule which matches tasks in the 00_created state whose owner is a member of the team OGo#955840. An object property will be created on matching tasks and the tasks will be automatically archived.

A task rule may also contain an action value which must specify a valid task action: reject, accept, comment, done, reactivate, or archive. If the rule matches the task this action will be performed on the task with a comment indicating the name of the rule which specified the action; if an action is performed by a rule not having a name the rule invocation will be recorded as unnamed. Additional rules will not be applied to the task as a result of rule-driven actions; this prevents creating a rule loop.

A rule file may contain as many rules as necessary.

- apply:
  name: rule1
- apply:
  name: rule2
- apply:
  name: rule3

Order of operations when planning task rules is important. Whatever operation is performed on the task happens first [create, update, or action], then rules are checked. Each rule that matches will apply its specified values, and then the specified action, if any, will be performed – after the values of the task have been updated by the rule. As previously stated, multiple rules from a rule-set may apply to a task; however, there is no order to how rules are processed.

by whitemice at August 22, 2015 02:46 PM

August 19, 2015

As it were ...

Flowers of the evening

Every now and then I take a picture that I’m really pleased with.  The other day we were at my in-laws farm, just home from the Buckley Old Engine show, and the evening sun was skimming across all the wildflowers in the barn yard.  I pulled out my phone and got this one.

Wild thistle in the summer evening sun

I love the way it looks like a water color when you zoom in to 100% on the full size.

Zoomed photo of wild thistle, looking like water color.

by topher at August 19, 2015 12:29 PM

August 18, 2015

As it were ...

Buckley Old Engine Show 2015

Welcome to the Buckley Old Engine Show

Every year my family goes to the Buckley Old Engine Show. It’s kind of like the fair, but without rides. Lots of displays, shows, and deep fried everything.

I took a number of very short videos with my phone, which are in a YouTube playlist below.

I also took some pictures:


by topher at August 18, 2015 12:00 PM

August 07, 2015

OpenGroupware (Legacy and Coils)

Conversion Of Repositories to git

The OpenGroupware Code repository has been converted from Mercurial to Git. The Mercurial repositories are no longer available.

New Code Repo URLS:

  • Read Only : git clone git:// coils-code
  • Read/Write : git clone ssh://{USERNAME} coils-code

by whitemice at August 07, 2015 11:32 AM

August 06, 2015

OpenGroupware (Legacy and Coils)

OpenGroupware Server Side Filters

OSSF [OpenGroupware Server Side Filter] modules provide for server-side transformations of workflow messages and groupware documents via simple HTTP GET requests. Use of OSSF simplifies client applications by allowing them to offload some complex and expensive operations to the OpenGroupware Coils server. OSSF modules can be used when requesting message or document content via either the AttachFS or the WebDAV presentation protocols.

All activation of OSSF is performed by adding parameters to the URL when retrieving the document or message. The URL parameter ossfchain activates a sequence of server-side filters. The value of this parameter is a comma-separated list of filters to activate; the filters will be daisy-chained in the order they are specified in the parameter. Parameters can be specified for each filter by passing those parameters in the URL prefixed by the appropriate filter's sequence in the OSSF chain. So a parameter of "0.name" will pass the specified value as the parameter name to the first filter in the chain, a parameter of "1.name" will pass it to the second filter in the chain, and so on.

Even if only one filter is specified the parameters for that filter must be prefixed with "0.".

If a filter requires no parameters then no parameters need to be specified.
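As a sketch of the convention above, a chained-filter URL could be assembled like so (the host name, object id, and helper function are hypothetical; only the ossfchain parameter and the "N." prefix come from the text):

```python
from urllib.parse import urlencode

def build_ossf_url(base, filters, params):
    """Assemble an AttachFS URL activating a daisy-chain of OSSF filters.

    `filters` is the ordered list of filter names; `params` maps
    (filter_index, name) -> value and is emitted with the "N." prefix
    that routes each parameter to its filter in the chain.
    """
    query = {'ossfchain': ','.join(filters)}
    for (index, name), value in params.items():
        query['{0}.{1}'.format(index, name)] = value
    return '{0}?{1}'.format(base, urlencode(query))

# Hypothetical document URL; chain thumbnail, then autocrop.
url = build_ossf_url(
    'http://coils.example.com/attachfs/download/31400001',
    ['thumbnail', 'autocrop'],
    {(0, 'width'): '200', (0, 'height'): '200'},
)
```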

The sequence of filters may be important; in some cases a filter may change the MIME type of the data stream and some filters will only accept input of an appropriate type. For example, in count mode the json OSSF will change the MIME type of the document from "application/json" to "text/plain". The pdftotext OSSF will also change the MIME type from "application/pdf" to "text/plain".

Currently implemented OSSF modules are:

  • json - Supports counting, paginating, and filtering JSON content.
  • pdftotext - Returns just the plain text portion of a PDF document.
  • thumbnail - Resizes image content allowing the client to quickly download just a thumbnail of the specified document.
  • autocrop - Reduce an image to the bounding box defined by the presence of image data; this removes the transparent margins around an image.
  • rml - Transform RML content into a PDF document.
  • background - Overlay an image onto a solid background color of the same size as the image.
  • markdown - Transform Markdown input into HTML.

Some OSSF Modules In Detail

The json OSSF
The json OSSF allows the client to request information about the contents of a JSON document or retrieve only a selected portion of the document. The primary use-case for the json OSSF is the instance where a client application needs to access a large JSON document; this can be slow and resource intensive if the client does not itself support streaming parsing or caching. With the json OSSF the client can paginate through the document, or filter it to just the desired records, just by specifying URL parameters.

The json OSSF can operate in one of two modes: count and pagination. Mode is specified via the mode parameter. In both modes the filter requires a path parameter; this value specifies the path to the elements of the JSON document that should either be returned to the client or counted.

No additional parameters are supported in count mode. The result of count mode is a "text/plain" response indicating the number of records found in the JSON document.

In pagination mode a range parameter is supported; if no range is specified pagination will process the entire input stream. The value format for range is two integers separated by a hyphen, such as "1001-2000", to indicate a specific range, or a head/tail range such as "-1000" or "1001-", which will return all items up to or following the specified value. Ranges are inclusive, so a range of "1-5" will return five elements [if available].
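The documented range forms can be modeled locally like this (an illustration of the semantics, not the server's code):

```python
def parse_range(spec, total):
    """Interpret a json-OSSF range: "A-B", "-B", or "A-".

    Returns an inclusive (first, last) pair of 1-based positions,
    with the tail clamped to the number of available items.
    """
    head, _, tail = spec.partition('-')
    first = int(head) if head else 1
    last = int(tail) if tail else total
    return first, min(last, total)
```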

The criteria parameter allows the data elements from the JSON input document to be filtered based upon value; all the elements matching the specified path are evaluated until the specified range is filled or the input stream is exhausted. Criteria is specified in the form of "key,value". If the key can be interpreted as an integer value all list, dictionary, and string elements matching the path are evaluated; the key is used as a key for dictionary values and as an index for list and string elements. If the key is not numeric only dictionary elements are evaluated. Numeric types, either integer or decimal, cannot be evaluated with the criteria parameter. All evaluation is based upon equality; comparisons to character or string types are case-sensitive. When specifying a key to act as an offset in a string or list the index of the first element is 0.

For example, a URL filtering the JSON output message of process 17945730 with these parameters will return the first five elements [range=1-5] of the outer-most list [path=item] where the second element of each item has the value "70801310" [criteria=1,70801310].
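Putting those parameters together, a request URL along these lines (hypothetical host and path; the parameter names are the ones described above) reproduces that example:

```python
from urllib.parse import urlencode

# The "0." prefix routes each parameter to the first (and only)
# filter in the chain, as required even for a single filter.
query = urlencode({
    'ossfchain': 'json',
    '0.mode': 'pagination',
    '0.path': 'item',
    '0.range': '1-5',
    '0.criteria': '1,70801310',
})
url = 'http://coils.example.com/attachfs/download/17945730?' + query
```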

The pdftotext OSSF

The "pdftotext" filter will attempt to return all of the text from the PDF input stream. The output of this filter is primarily intended to facilitate searching. The output is unstructured text with a MIME type of "text/plain".

The thumbnail OSSF

The thumbnail filter will modify either a PNG or JPEG image to be no larger than any of the dimensions specified; desired maximum width and height are specified using the width and height parameters, respectively. If either width or height are not specified then they default to the original width or height of the image, respectively. The aspect ratio of the image is preserved.

Thumbnails are generated using a high-quality anti-aliasing filter.

An input stream representing data other than a PNG or JPEG will raise an exception, terminating the filter chain and returning an HTTP error code for the client's request. For example, a request specifying a width and height of 200 returns a 200x200 thumbnail of the specified groupware document.
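The aspect-ratio behaviour described above can be sketched as a local calculation (this mirrors the documented defaults; it is not the server's implementation):

```python
def thumbnail_size(orig_w, orig_h, max_w=None, max_h=None):
    """Largest size within (max_w, max_h) preserving aspect ratio.

    Unspecified bounds default to the original dimensions, and an
    image already within bounds keeps its original size.
    """
    max_w = orig_w if max_w is None else max_w
    max_h = orig_h if max_h is None else max_h
    scale = min(max_w / float(orig_w), max_h / float(orig_h), 1.0)
    return int(orig_w * scale), int(orig_h * scale)
```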

In Closing

OSSF makes some tedious things easier for your custom web UI or other client.

If you have an idea or need for a useful OSSF join the coils-project mail list and let's discuss implementing it. Building an OSSF is almost as easy as using one. More detail regarding OSSF is available in WMOGAG (Whitemice Consulting OpenGroupware Administrator's Guide).

by whitemice at August 06, 2015 08:55 PM

Using AttachFS To Test XSLT Transforms

We have previously demonstrated testing format descriptions using AttachFS. Now with OpenGroupware Coils 0.1.49rc91 you can also list, retrieve, and test XSLT templates stored in the workflow engine [OIE]. XSLT transforms are accessible via AttachFS requests at the URL /attachfs/workflow/xslt.

A simple GET request will return a JSON list of the XSLT transforms defined in the workflow engine.

The text of an XSLT transform can be retrieved by name as /attachfs/workflow/xslt/templateName.xslt where templateName.xslt is the full name of the template [including the "xslt" file extension]. The response will have a type of application/xml and an Etag which is an MD5 sum of the template contents.
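Because the Etag is an MD5 sum of the template contents, a client can check whether its cached copy of a transform is still current without re-downloading it. A minimal sketch, assuming the server reports the hex digest:

```python
import hashlib

def xslt_etag(template_body):
    """MD5 sum of an XSLT template body, hex-encoded.

    Assumes the Etag returned for /attachfs/workflow/xslt/<name>.xslt
    is the hex digest of the template contents.
    """
    if isinstance(template_body, str):
        template_body = template_body.encode('utf-8')
    return hashlib.md5(template_body).hexdigest()
```

Compare this value against the Etag header of a GET for the template; a mismatch means the cached copy is stale.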

Transforms may be invoked by performing a PUT request to the URL /attachfs/workflow/xslt/templateName.xslt/transform where templateName.xslt is the full name of the template [including the "xslt" file extension]. The Content-Type of the payload of the PUT request must be appropriate content for the transformation. The result of a successful transform is assumed to be application/xml; if the transform is expected to result in another MIME-type, such as text/plain, the result MIME-type may be overridden by passing a URL parameter of mimetype.

OIE XSLT extension methods are enabled in the transform test as documented for the transformAction; the processId URL parameter allows a process object id to be specified in order to provide a process context for extension method execution. The specified process must exist and be readable by the current security context.

OSSF modules are supported for post-processing transformation results.

curl -v -o response.pdf -H 'Content-Type: application/xml' \
  -T structured.xml -u fred:secret \
  'http://coils.example.com/attachfs/workflow/xslt/TRWorkorderToRML.xslt/transform?processId=347004869&ossfchain=rml'

Text 1: Transform the contents of the file "structured.xml" using the transform TRWorkorderToRML within the context of the OIE process 347004869. After transformation the data will be processed by the OSSF module "rml". (The host name shown is a placeholder.)

by whitemice at August 06, 2015 08:40 PM

Whitemice Consulting

Cut-N-Paste Options Greyed Out In Excel

Yesterday I encountered a user who could not cut-and-paste in Microsoft Excel. The options to Cut, Copy, and Paste were disabled - aka 'greyed out' - in the menus. Seems like an odd condition.

The conclusion is that Excel's configuration had become corrupted. Resolution involves exiting Excel, deleting Excel's customized configuration, and then restarting the application. Lacking a set of configuration files the application regenerates a new default configuration and cut-and-paste functionality is restored.

Excel stores its per-user configuration in XLB files in the %APPDATA%\Microsoft\Excel folder (typically %USERPROFILE%\AppData\Roaming\Microsoft\Excel). Navigate to this folder and delete all the XLB files - with all Microsoft Office applications shut down.

After resolving this issue I found a more user approachable solution - no diddling in the file-system - but with Excel now working I was not able to verify it [and I do not know how to deliberately corrupt Excel's configuration].

  1. Right click on a sheet tab and select "View Code"
  2. From the "View" menu select "Immediate Window" if it's not already displayed.
  3. Paste the following into the "Immediate Window" and press enter: Commandbars("Cell").Reset

Of course, deleting the per-user configuration in Excel will delete the user's customizations.

by whitemice at August 06, 2015 11:06 AM

July 01, 2015

As it were ...

Carving out a profession

I remember well the day my little girl told me she was going to write a book.  She had a notebook and pencil in her hands.  I told her that was great, and I was excited to read it.

She sat at the table bowed her head toward the notebook and painstakingly started writing words in cursive.  Great big words, using two rows of the notebook for each sentence, her tongue stuck out the side of her mouth while she concentrated.

I knew she was serious when she filled the entire notebook and told me book one of her series was complete, and she needed another notebook.

I had always dreamed that my kids would follow in my career path, and become web developers.  It seems so fun and fulfilling to me, as well as relatively easy money for teens; how could they not love it?

Alas, no interest has been shown.  That said, almost exactly a year ago our entire family went to WordCamp Chicago.  Each of us went as attendees.  My little girls each got their own schedule and worked through it, deciding what they wanted to learn about.  My youngest, Sophia,  tended toward design sessions, my eldest, Ema, toward content sessions.

I had built them each a WordPress site before WordCamp,  taught them a tiny bit about html and CSS, and they each started blogging some.  Ema started blogging lots about Zelda and Pokémon etc., Sophia more about life experiences.

One day Ema asked me if she could start a new book on her blog.  I told her of course!  Then she went away and started writing.

I remember well the day I realized my little girl had become a Writer.  It was when she told me she had acquired an Editor.  And it was someone I didn’t know.

Then she started collaborating with another young man on a second novel, writing two concurrently.

I wondered if perhaps this was a passing interest, something that she would tire of when something new and exciting came up.  But then I began to see signs of the true Writer.  The need to write, and the pain when the words won’t come.  The angst over plot lines, character development; it was all there.

Someone once said “A Writer is someone who writes”.  My little girl is a Writer.

This week she published an essay on HeroPress about the impact WordPress has had on her life.  I don’t know if she’ll  make a living by writing, but I’m pretty confident she’ll be a Writer for the rest of her life.

I’m proud of you Ema.

by topher at July 01, 2015 01:00 PM

June 29, 2015

As it were ...

Installing apacman in Arch Linux

The Arch User Repository is one of the jewels of the Arch Linux distribution in my opinion.  The catch is that it CAN be difficult to install packages from it without a helper.  And the helpers are in the AUR, leaving us with a chicken and egg situation.  Here’s my recommendation on how to install apacman.

Go to the GitHub repo for apacman.  In the right column near the bottom you'll see a small form with the clone URL:

Make sure you choose HTTPS, unless you already have ssh keys set up.

Copy the https url and then open up a terminal on your local machine.  Run this command:

git clone <paste the HTTPS clone URL here>

That will download all the proper files and put them into a directory called apacman.  Cd into that directory and run this command:

./apacman -S apacman

and follow the instructions.  When it’s done apacman will have been installed from the AUR.  Then you can cd back up a directory and remove this local apacman directory with

rm -R apacman

At this point you can now run apacman from anywhere.

by topher at June 29, 2015 12:48 PM

June 22, 2015

As it were ...

A trip the greenhouse

We stopped by Country Side Greenhouse the other day, and I always end up taking pictures when we go.

[Photo gallery - captions include "Little yellow flower"]

by topher at June 22, 2015 05:07 PM

May 19, 2015

OpenGroupware (Legacy and Coils)

A Logic Example: Linking Documents To A Task

Most operations are performed by clients using some protocol over HTTP: XML-RPC, JSON-RPC, REST, WebDAV, etc... It is also possible to use Python on any OpenGroupware Coils node to perform operations with the Logic commands directly. The following example demonstrates how to create an object link from the specified task, OGo#318084549, to all the documents in folder OGo#231408659, setting the label of the link to the document's display name.

from coils.core import \
    AdministrativeContext, \
    Document, \
    initialize_COILS

TASK_OBJECTID = 318084549
FOLDER_OBJECTID = 231408659

if __name__ == '__main__':
    initialize_COILS()
    ctx = AdministrativeContext()
    task = ctx.r_c('task::get', id=TASK_OBJECTID)
    folder = ctx.r_c('folder::get', id=FOLDER_OBJECTID)
    for content in ctx.r_c('folder::ls', folder=folder):
        if isinstance(content, Document):
            # Create the task-to-document link; the exact link_manager
            # call signature is assumed here.
            ctx.link_manager.link(task, content, label=content.get_display_name())
    ctx.commit()
The call to initialize_COILS sets up the Coils runtime environment discovering the Logic bundles, etc... Then an AdministrativeContext is created - in OpenGroupware, both Legacy and Coils, nearly all operations are performed via a Context object. The Context provides the security context and identity used for all the Logic operations performed by calling run_command [for which r_c is a shortcut] as well as the database session and state [see the final call to the context's commit]. Context also provides a variety of manager object instances:

  • link_manager for handling object links
  • property_manager for handling object properties
  • type_manager for entity introspection
  • defaults_manager for user defaults
  • lock_manager for taking and releasing entity locks

Many of the Logic commands utilized through run_command use these same manager instances from the context they are run in.

The AdministrativeContext assumes the security context of OGo#10000 - the OpenGroupware superuser. No security restrictions apply to AdministrativeContext. If operations should be performed as a user the AssumedContext can be used; the object id of the desired context is specified when the context is created.

ctx = AssumedContext(10100)  # Operate as user OGo#10100

Context objects of AnonymousContext, having essentially no rights beyond globally readable objects, and NetworkContext, having the rights of a network service, are also available.

Using context objects and Logic commands, even complex operations can be scripted relatively simply.

The one caveat with scripting on an OpenGroupware Coils node is to ensure you execute the script as the OpenGroupware user - typically ogo. If scripts are executed as another user it is possible to create files within the OpenGroupware server root which will be inaccessible to server components.

by whitemice at May 19, 2015 07:59 PM

Whitemice Consulting

Which Application?

Which application manages this type of file? How can I, by default, open files of type X with application Y? These questions float around in GNOME forums and mailing lists on a regular basis.

The answer is: gvfs-mime .

To determine what application by default opens a file of a given type, as well as what other applications are installed which register support for that file-type, use the --query option, like:

awilliam@GNOMERULEZ:~> gvfs-mime --query text/x-python
Default application for 'text/x-python': org.gnome.gedit.desktop
Registered applications:
Recommended applications:

Applications register support for document types using the XDG ".desktop" standard, and the default application is stored per-user in the file $XDG_DATA_HOME/applications/mimeapps.list. In most cases $XDG_DATA_HOME is $HOME/.local/share [this is the value, according to the spec, when the XDG_DATA_HOME environment variable is not set].
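For reference, the association file is plain INI text; a minimal mimeapps.list mapping Python sources to Geany looks something like this (following the XDG mime-apps convention):

```ini
[Default Applications]
text/x-python=geany.desktop
```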

Not only can gvfs-mime query the association database, it can also be used by the user to set their default handler - simpler than attempting to discover the right object to right-click.

awilliam@GNOMERULEZ:~> gvfs-mime --set text/x-python geany.desktop
Set geany.desktop as the default for text/x-python
awilliam@GNOMERULEZ:~> gvfs-mime --query text/x-python
Default application for 'text/x-python': geany.desktop
Registered applications:
Recommended applications:

Python files are now, by default, handled by Geany.

by whitemice at May 19, 2015 11:12 AM

May 12, 2015

OpenGroupware (Legacy and Coils)

Using AttachFS To Test Format Descriptions

Format descriptions are one of the handiest features of the OIE workflow engine - no more writing code to read/write all the various files that are received or need to be created; OIE formats can read and write fixed-record-length files, delimited files, XLS files, and DIF files. Already built into the various format descriptions are work-arounds for the ways in which each of these files can be mangled. OIE's XLS reading, for example, can deal with spreadsheets where numbers have occasionally been entered as strings, and the delimited file reader can cope with randomly placed escape characters dropped in by applications unaware of what an escape character is. But how do you know, other than running the full process to import the data, that your format description matches the file? Or, later on, how do you know that the vendor/supplier/department/customer hasn't changed their format without informing you - again, without running the process? Answer: format descriptions can be tested via the AttachFS protocol.

Note that this usage of formats is intended for debugging and for use by workflow developers maintaining format descriptions; processing of the test data is synchronous and may not scale to extremely large data sets. If your data is very large, test with a subset - for example, the first several thousand lines of a fixed-record-length file. This feature is not intended as a production means for translating data; that is the purpose of OIE's readAction and writeAction workflow actions.

The URL to access format description tests is "/attachfs/workflow/format/formatName/verb" where formatName is the name of the format description and verb is the action to be performed using the format: read, write, or readwrite.

  • read : Read the uploaded data via the format description producing StandardXML. The Content-Type of the response will be "application/xml" with the filename "standard.xml".
  • write : Write the uploaded data, typically StandardXML, via the format description. The Content-Type of the response will be dependent on the MIME type produced by the format class.
  • readwrite : Reads the submitted data via the format description and then writes it out using the same format. This provides a means to test the round trip validity of a format description. The Content-Type of the response will be dependent on the MIME type produced by the format class. The filename of the response data will be “”.

If the posted data can be processed by the specified format the response will be HTTP/200 and the payload of the response will be the result of the processing format. The file-name for the result will be included in both the disposition of the response and as the value of the X-OpenGroupware-Filename header. While Etag headers are provided in the format test responses they are nothing more than a time-stamp in order to ensure no content from tests is cached by clients or intermediate proxies.

$ curl -vvv -o standard.xml -u fred -T APNETCOM \
  'http://coils.example.com/attachfs/workflow/format/CTABSAPPaymentReceipt/read'

Text 1: PUT the contents of the local file APNETCOM in order to test reading the data via the CTABSAPPaymentReceipt format description. (The host name shown is a placeholder.)

If processing the posted data via the format fails, a response of HTTP/418 "Teapot" will be returned to the client. In this case the payload of the response will be a text/plain stack trace of the exception which interrupted processing. A response of HTTP/500 indicates an error occurred outside the format operation.
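A small client-side helper for the verbs and response codes just described might look like this sketch (the helper functions are ours, not part of the Coils API; the URL pattern follows the one given above):

```python
def format_test_url(base, format_name, verb):
    """Build the AttachFS URL for testing a format description."""
    if verb not in ('read', 'write', 'readwrite'):
        raise ValueError('verb must be read, write, or readwrite')
    return '{0}/attachfs/workflow/format/{1}/{2}'.format(base, format_name, verb)

def interpret_status(code):
    """Map the documented response codes to an outcome."""
    if code == 200:
        return 'format processed the data'
    if code == 418:
        return 'format failed; payload is a stack trace'
    return 'error outside the format operation'
```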

A GET request may also be performed for the URL "/attachfs/workflow/format/formatName", where formatName is the name of the format description; this retrieves the raw YAML markup of the format description.

by whitemice at May 12, 2015 10:53 AM

May 07, 2015

Ben Rousch's Cluster of Bleep

My Farewell to GRMakers

Many of you have seen the recent board resignations and are wondering what the heck is going on over at GR Makers. We each have our own experiences, and I will set out mine here. It is a long story, but I think you deserve to hear it, so you can draw your own conclusions. I encourage you to reply to me personally or via the comments on this blog post if you’d like to provide clarifications or additions to what I have to say.

I joined GR Makers not so much to make things, but to have an excuse to hang out with the most interesting group of people I’d ever met. That group started as half a dozen open source enthusiasts gathering at weekly Linux user group meetings at coffee shops, and grew to a much larger, more diverse, and eclectic gathering of developers, inventors, designers, electronics hackers, and much more thanks to Casey DuBois’ welcoming personality, non-judgemental inclusiveness, and networking prowess. A part of what brought the group together was an unstructured openness that made everyone feel like they had a say in what we were doing. When the group grew too large to continue meeting in Casey’s garage, several regulars looked around for ways of keeping the group together and growing in other locations.

Mutually Human Software offered a physical space and monetary support to keep the group together, but we had to change how the group was run. Since MHS was providing so many resources, they would own the group. There was a large meeting to decide if this was the way we wanted to go. The opinions were divided, but in the end we had to take this deal or disband the group because we’d have nowhere to meet. Casey took a job with MHS, and over the course of two years we slowly became a real makerspace. Casey continued to make connections between GR Makers, companies who donated equipment and supplies, and the community. The Socials became bigger, and so did the space.

As we grew, communication became a problem. If you didn’t attend the weekly socials and talk to Casey in person, you had no idea what was going on. Even those of us who were regularly there had no idea about how the makerspace was being run. An opaque layer existed between the community, and those who actually owned and made decisions affecting the group. Even basic questions from paying members would go unanswered when submitted to the official communication channel. Were we making money? How many members were there? Who are the owners? Is there a board, and if so, who is on it? Who is actually making decisions and how are those decisions being reached? Are our suggestions being seen and considered by these people?

Despite these issues, several interesting initiatives and projects came out of the community and makerspace: the Exposed ArtPrize Project, GR Young Makers, The Hot Spot, and most recently Jim Winter-Troutwine’s impressive sea kayak. I enjoyed the community, and wanted to see it continue to thrive.

I thought the communication problem was one of scale: there was a large community and only a few people running things. I assumed those in charge were simply overwhelmed by the work required to keep everyone informed. In an attempt to fix this problem, I volunteered to write a weekly newsletter which I hoped would act as a conduit for the leadership to inform those who were interested. I asked for a single piece of information when I started the newsletter: a list of board members and what their roles were. I did not receive this information, but went ahead anyways, thinking that it would be sorted out soon. I gathered interesting information by visiting the space and talking to the community at the Socials each week and put it into a digestible format, but still that simple piece of information was refused me. Each newsletter was approved by Samuel Bowles or Mark Van Holstyn before it was sent, sometimes resulting in a delay of days and occasionally resulting in articles being edited by them when they did not agree with what I had written.

Shortly after the first few editions of the newsletter, Casey and Mutually Human parted ways. My conversations with the people who formed that initial core of what became GR Makers revealed a much more systemic problem in the leadership than I had realized. There was indeed a board, made up of those people I talked to. They passed on concerns and advice from themselves and the members to the owners, but that’s all they were allowed to do. The board had no real power or influence, and it turns out that it had never had any. The decisions were being made by two people at MHS who held the purse strings, and even this advisory board was often kept in the dark about what was being decided.

This cauldron of problems finally boiled over and was made public at a town hall meeting on March 25, 2015. Over the course of a week, the advisory board and the owners held a series of private meetings and talked for hours to try to keep GR Makers together. Concessions and public apologies were made on both sides and an agreement was reached which seemed to satisfy nearly everyone. In short, it was promised that the leadership would give the board more powers and would become more transparent about finances, membership, and decision making. This link leads to my summary of that town hall meeting, and a nearly identical version of those notes went out in an approved edition of the newsletter.

The community was relieved that the makerspace we had worked so hard to create was not going to collapse, and I assumed that the board was being empowered. Bob Orchard was added to the advisory board and kept and published minutes from the board meetings – something which had not been done previously. These minutes always mentioned requests for the changes that had been agreed upon at the Town Hall, but action on those requests was always delayed. At the board meeting on April 29, the requests were finally officially denied. The minutes from that board meeting can be found here. Most of the board members – including all of the founders of that initial group in Casey’s garage – resigned as a result of this meeting.

It is up to each of us to decide if GR Makers as it exists today meets our desires and needs. There are still good people at GR Makers, but that initial group of interesting people has left. Without them I find very little reason to continue contributing. The ownership structure of GR Makers was an educational and enlightening experiment, but it is not what I want to be a part of. I think the openness and transparency that formed the backbone of that group which became GR Makers is gone, and I don’t think it is coming back. So it is with a heavy heart that I am resigning my membership.

But do not despair. That initial group of friends – that sociable collection of connectors, hackers, inventors, and makers – and a few new faces we’ve picked up along the way, have been talking together. We want to start over with a focus on the community and ideals that existed in the gatherings at Casey’s garage. It may be a while before we have a stable space to meet and tools for people to use, but I hope you’ll join us when we’re ready to try again. If you’d like to be kept up to date on this group, please fill out this short form.

by brousch at May 07, 2015 11:16 PM

May 01, 2015

As it were ...

Moving Forward

Today I start a new job.  I’ll be part of the Easy Digital Downloads team; writing docs, teaching things, speaking at WordCamps, and anything else awesome I can think of.  I’m really really excited to be working with the team that’s there.

Why? (and what about HeroPress?)

More than one person has asked me if XWP fired me because HeroPress didn’t get funding from Kickstarter.  I’d like to be very clear and say that did NOT happen.  It’s true that HeroPress didn’t become my full time job, and it’s true that there was no longer a position for me at XWP, but it was in no way a punishment for failure.

The week after the HeroPress Kickstarter failed I spent most of a day on Skype with Dave, our founder, looking for a place for me inside the company.  I realized later that since he’s in Australia he stayed up all night with me.

We found several different options that kept me working until I could decide what I wanted to do.  XWP staff spent time on the clock helping to figure out what *I* wanted to do, whether it be within X-Company or not.  They really went over and above to help *me*, and I’ll be forever grateful.

You didn’t answer the question

The “why” is that my time with HeroPress opened my eyes to a wide range of jobs outside of coding that I could do, and many of them fascinated me.  I’ve always loved teaching, writing, and making contacts with people, and it became apparent that I was good at it, and might be able to make a living at it.

More than one company indicated that I should talk to them if HeroPress didn’t pan out, so that’s what I did.  Most of them weren’t looking for what I wanted to do, but even they spent time to help me figure things out.  WordPress has a good community around it.

Pippin’s been wanting someone to wrangle docs and communicate well for the company for a while now, so when he heard I was available and interested things just clicked.

Ok, but really, what about HeroPress?

When we were working full time on HeroPress there were two of us putting all our time and energy into it, and two others on contract putting a fair amount in as well.

When it became apparent we weren’t going to make a living from it, we all had to get back to work on other things.  The contractors went on their way, Dave went back to running X-Company, and I started coding again.

But I couldn’t let HeroPress go.  With the new direction of text instead of video, it’s not nearly as expensive or time-consuming.  So I just did it.

I’ve been doing it on my own time with my own direction since the Kickstarter ended.  I honestly don’t know where it’ll go.  I don’t know if it’ll ever support anyone financially in any way or not.  I just know it’s a good thing, and at this point I can afford to run it.

It might end tomorrow, a month from now, or never.  We’ll see what happens.

I mentioned it a bit above, but I wanted to be clear there’s no bad blood between XWP and me.  On the contrary, the team there quickly became like family to me.  They taught me so much, and supported me through some really hard times.

Last fall I did a post thanking each and every one of them, and I still mean every single word.

Here are some of the team from last spring:

Some of the XWP crew in Austin, Spring 2014

Thanks so much for everything, gang; you changed my life.

by topher at May 01, 2015 08:43 PM

April 30, 2015

As it were ...

A Weekend In Texas

This last weekend the whole family, plus Cate’s grandma, flew down to Texas.  Grandma’s going to stay a few weeks, and we were her escort.  Originally only Cate and Sophi were going to go with her, but then we decided Em and I could go as well.  This meant we ended up on different flights some of the time, though, which was interesting.

Their place is on the outskirts of a very small town, so it was nice to sleep at night with the windows open and listen to the crickets etc.

It rained sporadically the entire time we were down there.  That’s not to say we had gloomy days; rather, they were nice sunny days, and every six or eight hours there would be a crazy thunderstorm.  One evening we had big hail.

Here are some other pictures:


We had a great time travelling with the kids.  It was a little stressful because of the way flights were arranged, but everything worked out great.

by topher at April 30, 2015 01:53 PM