Planet GRLUG

July 03, 2018

As it were ...

A Dream Job?

You may recall that in January of 2017 I started a grand experiment with Tanner Moushey. As experiments go, it was a great success, which is to say we learned a lot. As businesses go, it lasted until Feb of 2018. It was a great experience, and I learned a lot, and it paid the bills for a year, but as any entrepreneur will tell you, it’s a stressful life.

So after February I started looking for a Real Job. I applied to a number of places that didn’t even respond (one of which had approached me first!). I did two trials at Automattic and washed out of both of them. That was a great learning experience as well.

Spring faded into summer, and I was doing contract work to keep bread on the table, but that was getting old.

Then one night at 10pm a couple weeks ago my friend Luke sent me a Slack note, saying he knew of a large company looking for a WordPress evangelist, would I be interested? If you know anything about me then you know I was immediately interested.

He told me a little about it on the spot, but he was in a meeting with them in Sydney at the time (hence 10pm my time). I was a little wary at first. This sounded REALLY good, and I’d already been disappointed by other things this summer.

The next morning I sent an email to The Guy at The Company and we arranged to talk when he got back to Austin.  He basically went from plane ride from Sydney to a meeting with me to jury duty, all in one day. Iron man.

We talked for about 30 min and they said they were sending me an offer as quickly as possible.  Five days later I had an offer and accepted it!

So now I’m the WordPress Developer Evangelist for BigCommerce. “But wait!” you say. “They don’t do WordPress, do they?” For the unaware, BigCommerce is a hosted ecommerce solution. You sign up, pay the fee, and *poof* you have a store. Well, recently they decided to get into WordPress, big time. You can read about it here and here.

I’m crazy excited of course. I’ve been looking for a WordPress evangelist job for years, but beyond that I’m also really excited about the product. I know who built it, and I know who’s code reviewing it. I’ve been assured by people I trust that they’re putting the appropriate time and money into this project, and it should be really really solid. The number of good WordPress ecommerce plugins is really low, and some serious competition will only be a good thing I think.

So maybe I’ll be seeing you at a WordCamp soon! Feel free to ask me all the questions.

The post A Dream Job? appeared first on As it were....

by topher at July 03, 2018 02:56 PM

May 29, 2018

Whitemice Consulting

Disabling Transparent Huge Pages in CentOS7

The THP (Transparent Huge Pages) feature of modern Linux kernels is a boon for on-metal servers with a sufficiently advanced MMU. However, THP can also result in performance degradation and inefficient memory use when enabled in a virtual machine [depending on the hypervisor and hosting provider]. See, for example, "Use of large pages can cause memory to be fully allocated". If you are seeing issues in a virtualized environment that point toward unexplained memory consumption, it may be worthwhile to experiment with disabling THP in your guests. These are instructions for controlling the THP feature through the use of a systemd unit.

Create the file /etc/systemd/system/disable-thp.service:

[Unit]
Description=Disable Transparent Huge Pages (THP)
[Service]
Type=simple
ExecStart=/bin/sh -c "echo 'never' > /sys/kernel/mm/transparent_hugepage/enabled && echo 'never' > /sys/kernel/mm/transparent_hugepage/defrag"
[Install]
WantedBy=multi-user.target

Enable the new unit:

sudo systemctl daemon-reload
sudo systemctl start disable-thp
sudo systemctl enable disable-thp

THP will now be disabled. However, already-allocated huge pages are still active. Rebooting the server is advised to bring up the services with THP disabled.
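To confirm the change you can read the sysfs file directly; the active setting is shown in brackets, so after the unit has run you should see something like:

cat /sys/kernel/mm/transparent_hugepage/enabled
always madvise [never]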

by whitemice at May 29, 2018 07:30 PM

May 06, 2018

Whitemice Consulting

Informix Dialect With CASE Derived Polymorphism

I ran into an interesting issue when using SQLAlchemy 0.7.7 with the Informix dialect. In a rather ugly database (which dates back to the late 1980s) there is a table called "xrefr" that contains two types of records: "supersede" and "cross". What those signify doesn't really matter for this issue so I'll skip any further explanation. But the really twisted part is that while a single field distinguishes between these two record types - it does not do so based on a consistent value. If the value of this field is "S" then the record is a "supersede"; any other value (including NULL) means it is a "cross". This makes creating a polymorphic presentation of this schema a bit more complicated. But have no fear, SQLAlchemy is here!

When faced with a similar issue in the past, on top of PostgreSQL, I've created polymorphic presentations using CASE clauses. But when I tried to do this using the Informix dialect the generated queries failed. They raised the dreaded -201 "Syntax error or access violation" message.

The Informix SQLCODE -201 is in the running for "Most useless error message ever!". Currently it is tied with PHP's "Stack Frame 0" message. Microsoft's "File not found" [no filename specified] is no longer in the running as she is being held at the Hague to face war crimes charges.

Rant: Why do developers get away with such lazy error messages?

The original [failing] code that I tried looked something like this:

    class XrefrRecord(Base):
        __tablename__  = 'xrefr'
        record_id      = Column("xr_serial_no", Integer, primary_key=True)
        ....
        _supersede     = Column("xr_supersede", String(1))
        is_supersede   = column_property( case( [ ( _supersede == 'S', 1, ), ],
                                                else_ = 0 ) )

        __mapper_args__ = { 'polymorphic_on': is_supersede }   


    class Cross(XrefrRecord): 
        __mapper_args__ = {'polymorphic_identity': 0} 


    class Supersede(XrefrRecord): 
        __mapper_args__ = {'polymorphic_identity': 1}

The generated query looked like:

      SELECT xrefr.xr_serial_no AS xrefr_xr_serial_no,
             .....
             CASE
               WHEN (xrefr.xr_supersede = :1) THEN :2 ELSE :3
               END AS anon_1
      FROM xrefr
      WHERE xrefr.xr_oem_code = :4 AND
            xrefr.xr_vend_code = :5 AND
            CASE
              WHEN (xrefr.xr_supersede = :6) THEN :7
              ELSE :8
             END IN (:9) <--- ('S', 1, 0, '35X', 'A78', 'S', 1, 0, 0)

At a glance it would seem that this should work. If you substitute the values for their place holders in an application like DbVisualizer - it works.

The condition raising the -201 error is the use of place holders in a CASE WHEN structure within the projection clause of the query statement; the DBAPI module / Informix Engine does not [or can not] infer the type [cast] of the values. The SQL cannot be executed unless the values are bound to a type. Why this results in a -201 and not a more specific data-type related error... that is beyond my pay-grade.

An existential dilemma: Notice that when used like this in the projection clause the values to be bound are both input and output values.

The trick to get this to work is to explicitly declare the types of the values when constructing the case statement for the polymorphic mapper. This can be accomplished using the literal_column expression.

    from sqlalchemy import literal_column

    class XrefrRecord(Base):
        _supersede    = Column("xr_supersede", String(1))
        is_supersede  = column_property( case( [ ( _supersede == 'S', literal_column('1', Integer) ) ],
                                                   else_ = literal_column('0', Integer) ) )

        __mapper_args__     = { 'polymorphic_on': is_supersede }

Visually if you log or echo the statements they will not appear to be any different than before; but SQLAlchemy is now binding the values to a type when handing the query off to the DBAPI informixdb module.
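With the mapper configured this way, querying the subclasses works as you would expect - the CASE-derived discriminator criteria is added automatically. A minimal sketch (the session object here is hypothetical):

    # Only rows whose xr_supersede value is 'S' (is_supersede = 1) are returned
    supersedes = session.query(Supersede).all()

    # Only "cross" rows (is_supersede = 0) are returned
    crosses = session.query(Cross).filter(Cross.record_id < 1000).all()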

Happy polymorphing!

by whitemice at May 06, 2018 08:23 PM

Sequestering E-Mail

When testing applications one of the concerns is always that their actions don't affect the real world. One aspect of this is sending e-mail; the last thing you want is for the application you are testing to send a paid-in-full customer a flurry of e-mails saying he owes you a zillion dollars. A simple, and reliable, method to avoid this is to adjust the Postfix server on the host used for testing to bury all mail in a shared folder. This way:

  • You don't need to make any changes to the application between production and testing.
  • You can see the message content exactly as it would ordinarily have been delivered.

To accomplish this you can use Postfix's generic address rewriting feature; generic address rewriting processes addresses of messages sent [vs. received as is the more typical case for address rewriting] by the service. For this example we'll rewrite every address to shared+myfolder@example.com using a regular expression.

Step#1

Create the regular expression map. Maps are how Postfix handles all rewriting; a match for the input address is looked for in the left hand [key] column and rewritten in the form specified by the right hand [value] column.

echo "/(.)/           shared+myfolder@example.com" > /etc/postfix/generic.regexp

Step#2

Configure Postfix to use the new map for generic address rewriting.

postconf -e smtp_generic_maps=regexp:/etc/postfix/generic.regexp

Step#3

Tell Postfix to reload its configuration.

postfix reload

Now any mail, to any address, sent via the host's Postfix service will be delivered not to the original address but to the shared "myfolder" folder.
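A quick sanity check is to query the map directly with postmap; any address you try (the address below is just an example) should come back rewritten to the shared folder:

postmap -q "customer@example.net" regexp:/etc/postfix/generic.regexp
shared+myfolder@example.com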

by whitemice at May 06, 2018 08:11 PM

April 22, 2018

Whitemice Consulting

LDAP extensibleMatch

One of the beauties of LDAP is how simply it lets the user or application perform searching. The various attribute types hint at how to intelligently perform searches, such as case sensitivity with strings, or whether dashes should be treated as relevant characters in the case of phone numbers, etc. However, there are circumstances when you need to override this intelligence and make your search more or less strict - for example, forcing a case-sensitive match on a string. That is the purpose of the extensibleMatch.

Look at this bit of schema:

attributetype ( 2.5.4.41 NAME 'name'
EQUALITY caseIgnoreMatch
SUBSTR caseIgnoreSubstringsMatch
SYNTAX 1.3.6.1.4.1.1466.115.121.1.15{32768} )
attributetype ( 2.5.4.4 NAME ( 'sn' 'surname' )
DESC 'RFC2256: last (family) name(s) for which the entity is known by'
SUP name )

The caseIgnoreMatch means that searches on attribute "name", or its descendant "sn" (used in the objectclass inetOrgPerson), are performed in a case insensitive manner. So...

estate1:~ # ldapsearch -Y DIGEST-MD5 -U awilliam sn=williams dn
SASL/DIGEST-MD5 authentication started
Please enter your password:
SASL username: awilliam
SASL SSF: 128
SASL installing layers
# Adam Williams, People, Entities, SAM, whitemice.org
dn: cn=Adam Williams,ou=People,ou=Entities,ou=SAM,dc=whitemice,dc=org
# Michelle Williams, People, Entities, SAM, whitemice.org
dn: cn=Michelle Williams,ou=People,ou=Entities,ou=SAM,dc=whitemice,dc=org

... this search returns two objects where the sn value is "Williams" even though the search string was "williams".

If for some reason we want to match only the exact string "williams", and not "Williams", we can use the extensibleMatch syntax.

estate1:~ # ldapsearch -Y DIGEST-MD5 -U awilliam "(sn:caseExactMatch:=williams)" dn
SASL/DIGEST-MD5 authentication started
Please enter your password:
SASL username: awilliam
search: 3
result: 0 Success
estate1:~ #

No objects are found, as both entries store the value with an initial capital letter: "Williams".

Using extensibleMatch I was able to match the value of "sn" with my own preference regarding case sensitivity. The syntax for an extensibleMatch is "({attributename}:{matchingrule}:{filterspec})". This can be used inside a normal LDAP filter alongside 'normal' matching expressions.
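For example, against the same directory as above, a filter combining a standard equality match with a case-exact extensibleMatch might look like:

ldapsearch -Y DIGEST-MD5 -U awilliam "(&(objectClass=inetOrgPerson)(sn:caseExactMatch:=Williams))" dn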

For more information on extensibleMatch see RFC2252 and your DSA's documentation [FYI: Active Directory is a DSA (Directory Service Agent), as is OpenLDAP].

by whitemice at April 22, 2018 03:14 PM

Android, SD cards, and exfat

I needed to prepare some SD cards for deployment to Android phones. After formatting the first SD card in a phone I moved it to my laptop and was met with the "Error mounting... unknown filesystem type exfat" error. That was somewhat startling as GVFS gracefully handles almost anything you throw at it. Following this I dropped down to the CLI to inspect how the SD card was formatted.

awilliam@beast01:~> sudo fdisk -l /dev/mmcblk0
Disk /dev/mmcblk0: 62.5 GiB, 67109912576 bytes, 131074048 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x00000000

Device         Boot Start       End   Sectors  Size Id Type
/dev/mmcblk0p1 *     2048 131074047 131072000 62.5G  7 HPFS/NTFS/exFAT

Seeing the file-system type I guessed that I was missing support for the hack that is exFAT [exFAT is FAT tweaked for use on large SD cards]. A zypper search for exfat found two uninstalled packages; GVFS is principally an encapsulation of FUSE that adds GNOME awesome to the experience - so the existence of a package named "fuse-exfat" looked promising.

I installed the two related packages:

awilliam@beast01:~> sudo zypper in exfat-utils fuse-exfat
(1/2) Installing: exfat-utils-1.2.7-5.2.x86_64 ........................[done]
(2/2) Installing: fuse-exfat-1.2.7-6.2.x86_64 ........................[done]
Additional rpm output:
Added 'exfat' to the file /etc/filesystems
Added 'exfat_fuse' to the file /etc/filesystems

I removed the SD card from my laptop, reinserted it, and it mounted. No restart of anything required. GVFS rules! At this point I could move forward with rsync'ing the gigabytes of documents onto the SD card.

It is also possible to format the card on the openSUSE laptop in the first place. Partition the card, creating a partition of type "7", and then use mkfs.exfat to format the partition. Be careful to give each card a unique ID using the -n option.

awilliam@beast01:~> sudo mkfs.exfat  -n 430E-2980 /dev/mmcblk0p1
mkexfatfs 1.2.7
Creating... done.
Flushing... done.
File system created successfully.

The mkfs.exfat command is provided by the exfat-utils package; a filesystem-utils package exists for most (all?) supported file-systems. These -utils packages provide the various commands to create, check, repair, or tune the eponymous file-system type.

by whitemice at April 22, 2018 02:34 PM

April 03, 2018

Whitemice Consulting

VERR_PDM_DEVHLPR3_VERSION_MISMATCH

After downloading a VirtualBox-ready ISO of OpenVAS, the newly created virtual machine to host the instance failed to start with a VERR_PDM_DEVHLPR3_VERSION_MISMATCH error. The quick-and-dirty solution was to set the instance to use USB 1.1. This setting is changed under Machine -> Settings -> USB -> Select USB 1.1 OHCI Controller. After that change the instance boots and runs the installer.

virtualbox-qt-5.1.34-47.1.x86_64
virtualbox-5.1.34-47.1.x86_64
virtualbox-host-kmp-default-5.1.34_k4.4.120_45-47.1.x86_64
kernel-default-4.4.120-45.1.x86_64
openSUSE 42.3 (x86_64)

by whitemice at April 03, 2018 12:21 PM

March 11, 2018

Whitemice Consulting

AWESOME: from-to Change Log viewer for PostgreSQL

Upgrading a database is always a tedious process - a responsible administrator has to read through the Changelog of every version between the one ze is upgrading from and the one ze is upgrading to.

Then I found this! This is a Changelog viewer which allows you to select a from and a to version and shows you all the changelogs in between; on one page. You still have to read it, of course, but this is a great time saver.

by whitemice at March 11, 2018 01:15 AM

February 09, 2018

As it were ...

Get Array neighbors in PHP

I recently had an issue where I had a custom post type of Artist, and another of Artwork. When looking at a single piece of Artwork, I used posts2posts to get the related Artist, and then I also did a query to get an array of all of the other Artwork by that Artist. I used that array to render them as thumbnails below the main Artwork.

The related Artwork array really isn’t sorted in any way. It’s a standard post array, with incremental keys.

I needed to put links on the page to the Previous and Next Artworks.

Initially I used WordPress’ built-in functions for previous and next posts, but those rely on the chronology of all Artworks, irrespective of Artist, so they immediately left the current Artist and went to something unrelated.

To get the array I wanted, I took my standard posts array and did this:

// get a list of all of the IDs of that other art
$art_list = wp_list_pluck( $connected_art, 'post_title', 'ID' );

which got me a very concise array whose keys match my post IDs. The post_title values are a red herring; I don’t use them.

I needed to take my Art array and get the ID of the post on either side of the current Artwork. I looked at prev() and next() but messing with the array pointer doesn’t work in a for loop, so it was a pain.

I found some code in the comments for the next() function that came close to what I wanted, but left some things to be desired. So I used it as a base and ended up with the function below.

/**
 * Function to get array keys on either side of a given key. If the
 * initial key is first in the array then prev is null. If the initial
 * key is last in the array, then next is null.
 *
 * If wrap is true and the initial key is last, then next is the first
 * element in the array.
 *
 * If wrap is true and the initial key is first, then prev is the last
 * element in the array.
 *
 * @param array $arr
 * @param string $key
 * @param bool $wrap
 *
 * @return array $return
 */
function array_neighbor( $arr, $key, $wrap = false ) {

	krsort( $arr );
	$keys       = array_keys( $arr );
	$keyIndexes = array_flip( $keys );

	$return = array();
	if ( isset( $keys[ $keyIndexes[ $key ] - 1 ] ) ) {
		$return['prev'] = $keys[ $keyIndexes[ $key ] - 1 ];
	} else {
		$return['prev'] = null;
	}

	if ( isset( $keys[ $keyIndexes[ $key ] + 1 ] ) ) {
		$return['next'] = $keys[ $keyIndexes[ $key ] + 1 ];
	} else {
		$return['next'] = null;
	}

	if ( false != $wrap && empty( $return['prev'] ) ) {
		$end            = end( $arr );
		$return['prev'] = key( $arr );
	}

	if ( false != $wrap && empty( $return['next'] ) ) {
		$beginning      = reset( $arr );
		$return['next'] = key( $arr );
	}

	return $return;
}

Then you get your data with something like this, where $current_art is just the current post ID.

// grab the IDs of the art on either side of this one
$art_neighbors = array_neighbor( $art_list, $current_art, true );

The output looks like this:

Array
(
    [prev] => 2257
    [next] => 2253
)

Those are post IDs, so I was able to simply drop those into get_permalink() for my next/prev links.
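Rendering the links from that array is then just a couple of conditionals around get_permalink(); a minimal sketch (the markup and link text are illustrative):

// render previous/next links only when a neighbor exists
if ( ! empty( $art_neighbors['prev'] ) ) {
	printf( '<a href="%s">&laquo; Previous</a>', esc_url( get_permalink( $art_neighbors['prev'] ) ) );
}

if ( ! empty( $art_neighbors['next'] ) ) {
	printf( '<a href="%s">Next &raquo;</a>', esc_url( get_permalink( $art_neighbors['next'] ) ) );
}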

The post Get Array neighbors in PHP appeared first on As it were....

by topher at February 09, 2018 02:57 PM

January 17, 2018

Whitemice Consulting

Discovering Informix Version Via SQL

It is possible using the dbinfo function to retrieve the engine's version information via an SQL command:

select dbinfo('version','full') from sysmaster:sysdual

which will return a value like:

IBM Informix Dynamic Server Version 12.10.FC6WE
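From a shell this can be run through the dbaccess utility; a quick sketch, assuming dbaccess is on the path and accepts "-" to read SQL from standard input:

echo "select dbinfo('version','full') from sysmaster:sysdual;" | dbaccess sysmaster -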

by whitemice at January 17, 2018 08:56 PM

December 28, 2017

OpenGroupware (Legacy and Coils)

ODBC Support Added To OIE

As of OpenGroupware Coils 0.1.49r112, support for ODBC data sources has been integrated into OIE. These SQL data sources are defined in the OIESQLSources directive just as PostgreSQL and Informix database connections are. This feature requires the pyodbc module to be installed. The availability of this module on your workflow node can be verified using the coils-dependency-check tool.

[ogo@workflow.coils.example.com ~]# coils-dependency-check 
OK: Module markdown (Markdown rendering, required for /wiki protocol) available.
OK: Module vobject (vCard and vEvent parsing) available.
OK: Module zope.interface (ZOPE Interfaces for RML engine) available.
OK: Module xlrd (XLS<2007 read support) available.
OK: Module pycups (IPP printing support) available.
OK: Module paramiko (SSH suppport.) available.
OK: Module dateutil (Date & Time Arithmatic) available.
OK: Module lxml (SAX & DOM XML Processing) available.
OK: Module Pillow (Python Imaging Library) available.
OK: Module psycopg2 (PostgreSQL RDBMS connectivity) available.
OK: Module base64 (Encode and decode Base64 data) available.
OK: Module yaml (YAML parser & serializer) available.
OK: Module pyodbc (ODBC SQL connectivity) available.   <<<<<<<<<<
OK: Module xlwt (XLS<2007 write support) available.
OK: Module sqlalchemy (Object Relational Modeling) available.
OK: Module pytz (Python Time Zone tables) available.
OK: Module smbc (SMB/CIFS integration) available.
OK: Module argparse (Enhanced argument parsing, required for /wiki protocol) available.
OK: Module ijson (Streaming JSON parser, requires libyajl) available.
OK: Module z3c.rml (RML Generator, also requires "zope.interface") available.
OK: Module coils.foundation.api.elementflow (Streaming XML Creation) available.
OK: Module coils.foundation.api.pypdf (Simple PDF Operations) available.
OK: Module untangle (XML parsing) available.
OK: Module gnupg (GPG/PGP suppport.) available.
OK: Module informixdb (Informix RDBMS connectivity) available.

The principal use for the ODBC connection is to connect to M$-SQL database engines. In order to make ODBC connections the proper ODBC driver must be installed and properly configured on the node.

ODBC database connections are defined in the OIESQLSources configuration directive just as PostgreSQL and Informix database connections are. The driver must be "odbc" and the parameter "DSN" must be the complete ODBC connection string.

coils-server-config --directive=OIESQLSources  --value='{
....
  "acumaticaMVP1": {"driver": "odbc",
                    "DSN": "DSN=AcumaticaDB;UID=oie-workflow-account;PWD=XXXXXXXXXXXXXXXXXXXXXXXXXX"}
}'

A defined connection can be tested using the coils-test-sql utility. This test is best performed from the node hosting the coils.workflow.executor component.

[ogo@workflow.coils.example.com ~]# coils-test-sql --name=acumaticaMVP1 --table=ARInvoice
Store root is /var/lib/opengroupware.org
Connected to SQL "acumaticaMVP1"
  Select Table: "ARInvoice"

If the connection works then it is ready to be used from workflow actions like sqlSelectAction and sqlExecuteAction.

by whitemice at December 28, 2017 02:43 PM

October 09, 2017

Whitemice Consulting

Failure to apply LDAP pages results control.

On a particular instance of OpenGroupware Coils the switch from an OpenLDAP server to an Active Directory service - which should be nearly seamless - resulted in "Failure to apply LDAP pages results control.". Interesting, as Active Directory certainly supports paged results - the 1.2.840.113556.1.4.319 control.

But there is a caveat! Of course.

Active Directory does not support the combination of the paged results control and referrals in some situations. So to reliably get the paged control enabled it is also necessary to disable referrals.

...
dsa = ldap.initialize(config.get('url'))
dsa.set_option(ldap.OPT_PROTOCOL_VERSION, 3)
dsa.set_option(ldap.OPT_REFERRALS, 0)
....

Disabling referrals is likely what you want anyway, unless you are going to implement referral following. Additionally, in the case of Active Directory the referrals rarely reference data which an application would be interested in.

The details of Active Directory and paged results + referrals can be found here.

by whitemice at October 09, 2017 03:03 PM

August 31, 2017

Whitemice Consulting

opensuse 42.3

Finally got around to updating my work-a-day laptop to openSUSE 42.3. As usual I did an in-place distribution update via zypper. This involves replacing the previous version repositories with the current version repositories - and then performing a dup. And as usual the process was quick and flawless. After a reboot everything just-works and I go back to doing useful things. This makes for an uninteresting BLOG post, which is as it should be.

zypper lr --url
zypper rr http-download.opensuse.org-f7da6bb3
zypper rr packman
zypper rr repo-non-oss
zypper rr repo-oss
zypper rr repo-update-non-oss
zypper rr repo-update-oss
zypper rr server:mail
zypper ar http://download.opensuse.org/distribution/leap/42.3/repo/non-oss/ repo-non-oss
zypper ar http://download.opensuse.org/distribution/leap/42.3/repo/oss/ repo-oss
zypper ar http://download.opensuse.org/repositories/server:/mail/openSUSE_Leap_42.3/ server:mail
zypper ar http://download.opensuse.org/update/leap/42.3/non-oss/ repo-update-non-oss
zypper ar http://download.opensuse.org/update/leap/42.3/oss/ repo-update-oss
zypper ar http://ftp.gwdg.de/pub/linux/misc/packman/suse/openSUSE_Leap_42.3 packman
zypper lr --url  # double check
zypper ref  # refresh
zypper dup --download-in-advance  # distribution update
zypper up  # update, just a double check
reboot

Done.

by whitemice at August 31, 2017 12:49 PM

August 07, 2017

OpenGroupware (Legacy and Coils)

An Introduction to OIE Tables

The OIE Table entity provides a simple means to embed look-ups, filters, and translations into workflows. The principle of a Table is that it always receives a value and returns a value - a look-up.

Table definitions are presented via WebDAV in the /dav/Workflow/Tables folder as simple YAML files; they can be created and edited using your favorite text editor. If you are familiar with OIE Format definitions the Table definition should seem very familiar. Tables are identified by their unique name which is specified by the name attribute of their YAML description.

StaticLookupTable

The static look-up table provides a method to do simple recoding of data without relying on external data-sources such as an LDAP DSA or SQL RDBMS. The definition of a StaticLookupTable provides a values dictionary where input values are looked up and the corresponding value returned. The optional defaultValue directive may specify a value to be returned if the input value is not found in the values table; if no defaultValue is specified the table will return a None.

class: StaticLookupTable
defaultValue: 9
values: { 'ME1932': 4,
          'Kalamazoo': 'abc' }
name: TestStaticLookupTable

Text 1: A StaticLookupTable that returns 4 for the input value "ME1932", and "abc" for the input value "Kalamazoo". Any other input value results in the value 9.

PresenceLookupTable

A presence look-up table contains a list of static values. It returns a specified value if the input value matches one of the values stored in the table; otherwise it returns an alternative value. Presence look-up tables are most commonly used when a small and known set of values needs to be used to filter a set of data.

class: PresenceLookupTable
name: BankeCodeExclusionTable
returnValueForFalse: true
returnValueForTrue: false
values: [ME1932, Kalamazoo, 123]

Text 2: A PresenceLookupTable that returns boolean false for the input values "ME1932", "Kalamazoo", and 123; it returns boolean true for all other input values.

SQLLookupTable

An SQLLookupTable permits the translation or look-up of values using an SQL data source defined in the OIESQLSources server default. The table definition must at the minimum define SQLQueryText and SQLDataSourceName directives. Within the SQLQueryText value the "?" is substituted for the input value; the first column of the query result is the return value of the table. If the query identifies no rows then a None value is returned from the table.

SQLDataSourceName: mydbconnection
SQLQueryText: 'SELECT CASE WHEN COUNT(*) = 0 THEN
    ''True'' ELSE ''False'' END  FROM bank_code_exclusion WHERE bank_code
    = ? AND ex_service_followup = ''Y'';'
class: SQLLookupTable
doInputStrip: true
doInputUpper: true
doOutputStrip: false
doOutputUpper: false
name: ServiceFollowUpExclusionTable
useSessionCache: true

Text 3: An example SQLLookupTable which uses the data-source "mydbconnection" as defined in the OIESQLSources server default.

The optional directives doInputStrip, doInputUpper, doOutputStrip, and doOutputUpper, which all default to false, allow the input and output values to be converted to upper case and stripped of white-space. Converting a value to upper case may be useful where the database backend does not support case-insensitive comparison. Trimming whitespace on input values protects against attempting to look up padded strings, and output trimming is useful for database engines that always return values of columns defined like CHAR(30) as padded strings.

Using Tables

In Python code, using a table is as simple as loading the class and calling the lookup_value method. How the Table performs the look-up is entirely encapsulated in the appropriate Table class [SQLLookupTable, StaticLookupTable, ...].

table = Table.Load(name)
return table.lookup_value(value)

Text 4: How to use a table to look-up values in Python code.

More commonly, Table lookups are performed within workflow actions such as maps and transforms. When performing an XSLT transform any table is available via the tablelookup OIE extension function; this allows values from the input stream to be easily used as lookup values, facilitating translation of ERP and other codes/abbreviations between disparate applications.

<xsl:template match="row">
  <xsl:if test="total_charges>1000">
    <xsl:variable name="include" select="oie:tablelookup('ServiceFollowUpExclusionTable',string(bank_code))"/>
    <xsl:if test="$include='True'">
      <row>
      ...

Text 5: This snippet of an XSLT transform demonstrates how to use a Table lookup from within a stylesheet.

Overall, Tables provide a simple and elegant way to automate the look-up and translation of the codes embedded in the wide variety of documents processed by the workflow engine, as well as providing a means to easily implement dynamic filtering.

Author: Adam Tauno Williams

by whitemice at August 07, 2017 10:25 AM

Invoking an OIE Route from PHP

The repository now contains a PHP class making it simple to invoke an OIE workflow from PHP; see the oie.php file. Using the OIEProcess class defined in that file, processes can be created and the process id and input message UUID retrieved.

$HTTPROOT   = "http://coils.example.com";
$ROUTENAME  = "TEST_MailBack";
$PARAMETERS = array('myParameter'=>'YOYO MAMA', 'otherParam'=>4);
$request = new OIEProcess($HTTPROOT, $ROUTENAME, $PARAMETERS);
if ($request->start('adam', '*******', fopen('/etc/passwd', 'r'), 'text/plain') == 'OK') {
    echo "\n";
    echo "Process ID: " . $request->get_process_id() . "\n";
    echo "Message UUID: " . $request->get_message_id() . "\n";    
}

The start method returns either "OK", "OIEERR" (OIE refused the request), or "SYSERR" (the curl operation failed). The first and second parameters of start are the user credentials; the optional third and fourth parameters are the input message stream and the payload mimetype. If no mimetype is specified a default of "application/octet-stream" is assumed.
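A small sketch of acting on those return values (the credentials and input stream are the same illustrative values as above):

$status = $request->start('adam', '*******', fopen('/etc/passwd', 'r'), 'text/plain');
if ($status == 'OK') {
    echo "Process ID: " . $request->get_process_id() . "\n";
} elseif ($status == 'OIEERR') {
    echo "OIE refused the request\n";
} else {
    // SYSERR - the curl operation to the server failed
    echo "Could not reach the OIE server\n";
}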

Author: Adam Tauno Williams

by whitemice at August 07, 2017 10:10 AM

The SMTP Listener

Similar to the coils.workflow.9100 service, which can deliver raw socket connections into defined workflows, OpenGroupware Coils also provides an SMTP listener. The listener enables workflows to receive messages via SMTP; simply configure your MTA (Mail Transfer Agent) to route some prefix such as "ogo" to your OpenGroupware Coils instance and then use plussed address syntax to deliver e-mail messages to specific objects.

Workflows can be invoked using ogo+wf+routeName@ syntax; for example to send an e-mail message to the workflow named ExampleStatusUpdate a message would be sent to ogo+wf+examplestatusupdate@example.com. In the case of a workflow the text/plain body of the message will become the input message for a new instance (Process) of that route. A ticket is open to implement support for receiving specific MIME-types attached to a message as the process' input message.
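As an illustration of the plussed address convention, a message can be handed to the site MTA with nothing more than Python's standard smtplib; the host name and addresses below are assumptions for the example:

import smtplib
from email.mime.text import MIMEText

# the text/plain body becomes the input message of the new process
msg = MIMEText('status=shipped;order=12345')
msg['Subject'] = 'Status update'
msg['From'] = 'sender@example.com'
msg['To'] = 'ogo+wf+examplestatusupdate@example.com'

smtp = smtplib.SMTP('mta.example.com')
smtp.sendmail(msg['From'], [msg['To']], msg.as_string())
smtp.quit()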

To target an entity with a message, assuming your delivery is routing ogo@ to the OpenGroupware Coils listener, send a message to ogo+objectId@example.com where objectId is the numeric object id of the entity. In most cases the entity must allow read access to the NetworkContext (OGo#8999) via an ACE in the object's ACL. OpenGroupware Coils network service components interact with the object model as the NetworkContext; this security context has minimal access to the server's objects and content for obvious security reasons. Additional access must be deliberately granted to allow unauthenticated services such as the socket and SMTP listeners to interact with an object.

Document folders are one entity that supports receiving messages from the SMTP listener. In order to access the folder the NetworkContext must have read access to the folder entity and in order to actually store content in the folder it must have write access. For this reason it is recommended that a specific folder be created in a project for the purpose of receiving SMTP messages; from that folder a user, application, or workflow can relocate and possibly rename the documents.

For example, a message sent to ogo+1234567@example.com, where OGo#1234567 is a document folder to which the NetworkContext has read/write permissions, will be stored in its raw form in that folder. Most document-oriented applications however cannot easily deal with raw e-mail messages [after all, they aren't e-mail clients]. Perhaps what you really need is some document that is attached to these e-mail messages? This is a common use case with document centers - they scan documents into PDF and deliver them via SMTP. In order to facilitate this use-case, and to streamline document management, the user or application can define the MIME type of the documents the folder should receive. If a MIME type is defined for SMTP collection on a folder then only that type of document, attached to a received message, will be saved - the attachments will be automatically extracted from the message and the message itself discarded.

In order to define a MIME-type for SMTP collection on a folder, create an object property in the http://www.opengroupware.us/smtp namespace having an attribute name of collectMIMEType. The value of that property should be the MIME-type you desire to collect. For example, if {http://www.opengroupware.us/smtp}collectMIMEType were defined on OGo#1234567 [from our previous example] having a value of "application/pdf" then only PDF attachments would be saved to the folder. There are two special-case MIME-types:

  • message/rfc822 - This is the default type, and just as if the object property were not defined, will cause incoming messages to be saved in their entirety.
  • text/plain - This value will save the text/plain message body as a document in the folder.

On every document created by the SMTP listener a set of object properties will be created. These properties correspond to headers in the e-mail message from which the document was created; if a corresponding header does not exist in the e-mail message then the corresponding object property will not be created. The SMTP listener defines a set of interesting headers; if you believe there are headers that should be captured but are not included in this list feel free to request the addition of the header via the project's ticket application on SourceForge.

The currently defined list of object properties created from message headers are:

  • {us.opengroupware.mail.header}subject
  • {us.opengroupware.mail.header}x-spam-level
  • {us.opengroupware.mail.header}from
  • {us.opengroupware.mail.header}to
  • {us.opengroupware.mail.header}date
  • {us.opengroupware.mail.header}x-spam-status
  • {us.opengroupware.mail.header}reply-to
  • {us.opengroupware.mail.header}x-virus-scanned
  • {us.opengroupware.mail.header}x-bugzilla-classification
  • {us.opengroupware.mail.header}x-bugzilla-product
  • {us.opengroupware.mail.header}x-bugzilla-component
  • {us.opengroupware.mail.header}x-bugzilla-severity
  • {us.opengroupware.mail.header}x-bugzilla-status
  • {us.opengroupware.mail.header}x-bugzilla-url
  • {us.opengroupware.mail.header}x-mailer
  • {us.opengroupware.mail.header}x-original-sender
  • {us.opengroupware.mail.header}mailing-list
  • {us.opengroupware.mail.header}list-id
  • {us.opengroupware.mail.header}x-opengroupware-regarding
  • {us.opengroupware.mail.header}x-opengroupware-objectid
  • {us.opengroupware.mail.header}x-original-to
  • {us.opengroupware.mail.header}in-reply-to
  • {us.opengroupware.mail.header}cc
  • {us.opengroupware.mail.header}x-gm-message-state
  • {us.opengroupware.mail.header}message-id

All documents created will have at least the property {us.opengroupware.mail.header}message-id, as Message-ID is a required header [per RFC822]. The SMTP component will not process a message that lacks a Message-ID header. The Message-ID and a timestamp are used to create the document's filename.

In addition to these properties the property {http://www.opengroupware.us/mswebdav}contentType used by the WebDAV presentation will also be set on created documents to store the original MIME-type.

These properties can be used to correlate or qualify the documents, and [of course] can be used as search qualifications when using zOGI's searchForObjects.

Document creation by SMTP provides a very simple integration path with innumerable consumer and enterprise devices. From there your applications can easily access the documents via zOGI (JSON-RPC or XML-RPC), AttachFS (REST), or WebDAV.

Author: Adam Tauno Williams

by whitemice at August 07, 2017 10:06 AM

June 06, 2017

Whitemice Consulting

LDAP Search For Object By SID

All the interesting objects in an Active Directory DSA have an objectSID, which is used throughout the Windows subsystems as the reference for the object. When using a Samba4 (or later) domain controller it is possible to simply query for an object by its SID, as one would expect - like "(&(objectSID=S-1-...))". However, when using a Microsoft DC, searching for an object by its SID is not as straight-forward; attempting to do so will only result in an invalid search filter error. Active Directory stores the objectSID as a binary value and one needs to search for it as such. Fortunately converting the text string SID value to a hex string is easy: see guid2hex(text_sid) below.

import ldap
import ldap.sasl
import ldaphelper

PDC_LDAP_URI = 'ldap://pdc.example.com'
OBJECT_SID = 'S-1-5-21-2037442776-3290224752-88127236-1874'
LDAP_ROOT_DN = 'DC=example,DC=com'

def guid2hex(text_sid):
    """convert the text string SID to a hex encoded string"""
    s = ['\\{:02X}'.format(ord(x)) for x in text_sid]
    return ''.join(s)

def get_ldap_results(result):
    return ldaphelper.get_search_results(result)

if __name__ == '__main__':

    pdc = ldap.initialize(PDC_LDAP_URI)
    pdc.sasl_interactive_bind_s("", ldap.sasl.gssapi())
    result = pdc.search_s(
        LDAP_ROOT_DN, ldap.SCOPE_SUBTREE,
        '(&(objectSID={0}))'.format(guid2hex(OBJECT_SID), ),
        [ '*', ]
    )
    for obj in [x for x in get_ldap_results(result) if x.get_dn()]:
        """filter out objects lacking a DN - they are LDAP referrals"""
        print('DN: {0}'.format(obj.get_dn(), ))

    pdc.unbind()

by whitemice at June 06, 2017 12:11 AM

April 08, 2017

As it were ...

Why I no longer hate GoDaddy

There was a time when I said “never GoDaddy”. I turned down contracts when the client wanted to be hosted on GoDaddy, and wouldn’t budge. Over the last few years my attitude has changed pretty dramatically. I’m happy to work with GoDaddy now, and I like what they’re doing as a company.

Recently a friend tweeted this:

That is absolutely a fair question, and I think one that deserves a better answer than a tweet back, so this post is intended to be that answer.

Why I Didn’t Like GoDaddy

My primary reason was their choice to use sex as a marketing tool. Every commercial made me cringe. I felt so sad that NASCAR’s first serious female contender was cast as someone sexy rather than someone with amazing accomplishments. There was so much opportunity there to inspire young women and girls with the idea that they can break cultural norms.

A secondary reason was the lifestyle of the owner. He simply made choices I don’t like. Lots of people do, and that’s fine, but I made the choice not to use his product.

There were also some tech issues I didn’t like.  For a long time you couldn’t get shell for example. That annoyed me like crazy.

Lastly, they were the biggest player. I always root for the underdog.

What Changed

The real change came when key people inside GoDaddy decided the company was doing harmful things, and decided to do something about it. The owner sold the company and took a smaller and smaller role in controlling the company until he was simply gone.

At that point the opportunity existed to take a higher road, and they did it. The sex came out of the commercials. There are now more women than men in positions of authority inside the company.

In general things have really turned around.

What Doesn’t Matter

I recently heard someone bad mouth GoDaddy, and then someone else jump in and say “How can you hate GoDaddy?  Mendel Kurland is such a cool guy!” For the unaware, Mendel works there. And he is a cool guy, I like him a lot. I have other friends that work there too.

None of that matters. My beef wasn’t with individual people there, but corporate direction.

So Everything’s Perfect?

No. There are still things I don’t like about GoDaddy. But those things are in the same class as things I don’t like about every host as well. They’re not using protocol X, or they meddle too much in the site creation, or whatever. They’re not anything that I would feel like I need to apologize to my daughter for.

In Summary

In the past I’ve been vocal about “never GoDaddy”. I’m not that way anymore.

The post Why I no longer hate GoDaddy appeared first on As it were....

by topher at April 08, 2017 10:13 PM

March 07, 2017

Whitemice Consulting

KDC reply did not match expectations while getting initial credentials

Occasionally one gets reminded of something old.

[root@NAS04256 ~]# kinit adam@example.com
Password for adam@Example.Com: 
kinit: KDC reply did not match expectations while getting initial credentials

Huh.

[root@NAS04256 ~]# kinit adam@EXAMPLE.COM
Password for adam@EXAMPLE.COM:
[root@NAS04256 ~]# 

In some cases the case of the realm name matters.

by whitemice at March 07, 2017 02:18 PM

February 09, 2017

Whitemice Consulting

The BOM Squad

So you have a lovely LDIF file of Active Directory schema that you want to import using the ldbmodify tool provided with Samba4... but when you attempt the import it fails with the error:

Error: First line of ldif must be a dn not 'dn'
Modified 0 records with 0 failures

Eh? @&^$*&;@&^@! It does start with a dn: attribute it is an LDIF file!

Once you cool down you look at the file using od, just in case, and you see:

0000000   o   ;   ?   d   n   :  sp   c   n   =   H   o   r   d   e   -

The first line does not actually begin with "dn:" - it starts with "o;?". You've been bitten by the BOM! But even opening the file in vi you cannot see the BOM, because every tool knows about the BOM and deals with it - with the exception of anything LDIF related.

The fix is to break out dusty old sed and remove the BOM -

sed -e '1s/^\xef\xbb\xbf//' horde-person.ldf  > nobom.ldf

And double checking it with od again:

0000000   d   n   :  sp   c   n   =   H   o   r   d   e   -   A   g   o

The file now actually starts with a "dn" attribute!

by whitemice at February 09, 2017 12:09 PM

Installation & Initialization of PostGIS

Distribution: CentOS 6.x / RHEL 6.x

If you already have a current version of PostgreSQL server installed on your server from the PGDG repository you should skip these first two steps.

Enable PGDG repository

curl -O http://yum.postgresql.org/9.3/redhat/rhel-6-x86_64/pgdg-centos93-9.3-1.noarch.rpm
rpm -ivh pgdg-centos93-9.3-1.noarch.rpm

Disable all PostgreSQL packages from the distribution repositories. This involves editing the /etc/yum.repos.d/CentOS-Base.repo file: add the line "exclude=postgresql*" to both the "[base]" and "[updates]" stanzas. If you skip this step everything will appear to work - but a future yum update may break your system.
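After the edit the relevant stanzas look something like this (only the exclude lines are additions; the rest of each stanza is left as distributed):

[base]
...
exclude=postgresql*

[updates]
...
exclude=postgresql*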

Install PostgreSQL Server

yum install postgresql93-server

Once installed you need to initialize and start the PostgreSQL instance

service postgresql-9.3 initdb
service postgresql-9.3 start

If you wish the PostgreSQL instance to start with the system at boot, use chkconfig to enable it for the current runlevel.

chkconfig postgresql-9.3 on

The default data directory for this instance of PostgreSQL will be "/var/lib/pgsql/9.3/data". Note that this path is versioned - this prevents the installation of a downlevel or uplevel PostgreSQL package from destroying your database if you install one accidentally or forget to follow the appropriate version migration procedures. Most documentation will assume a data directory like "/var/lib/postgresql" [notably unversioned]; simply keep in mind that you always need to contextualize the paths used in documentation to your site's packaging and provisioning.

Enable EPEL Repository

The EPEL repository provides a variety of the dependencies of the PostGIS packages provided by the PGDG repository.

curl -O http://epel.mirror.freedomvoice.com/6/x86_64/epel-release-6-8.noarch.rpm
rpm -Uvh epel-release-6-8.noarch.rpm

Installing PostGIS

The PGDG package for PostGIS should now install without errors.

yum install postgis2_93

If you do not have EPEL successfully enabled when you attempt to install the PGDG PostGIS packages you will see dependency errors such as:

---> Package postgis2_93-client.x86_64 0:2.1.1-1.rhel6 will be installed
--> Processing Dependency: libjson.so.0()(64bit) for package: postgis2_93-client-2.1.1-1.rhel6.x86_64
--> Finished Dependency Resolution
Error: Package: gdal-libs-1.9.2-4.el6.x86_64 (pgdg93)
           Requires: libcfitsio.so.0()(64bit)
Error: Package: gdal-libs-1.9.2-4.el6.x86_64 (pgdg93)
           Requires: libspatialite.so.2()(64bit)
Error: Package: gdal-libs-1.9.2-4.el6.x86_64 (pgdg93)
...

Initializing PostGIS

The template database "template_postgis" is expected to exist by many PostGIS applications; but this database is not created automatically.

su - postgres
createdb -E UTF8 -T template0 template_postgis
-- ... See the following note about enabling plpgsql ...
psql template_postgis
psql -d template_postgis -f /usr/pgsql-9.3/share/contrib/postgis-2.1/postgis.sql
psql -d template_postgis -f /usr/pgsql-9.3/share/contrib/postgis-2.1/spatial_ref_sys.sql 

Using the PGDG packages the PostgreSQL plpgsql embedded language, frequently used to develop stored procedures, is enabled in the template0 database from which the template_postgis database is derived. If you are attempting to use other PostgreSQL packages, or have built PostgreSQL from source [are you crazy?], you will need to ensure that this language is enabled in your template_postgis database before importing the scheme - to do so run the following command immediately after the "createdb" command. If you see the error stating the language is already enabled you are good to go, otherwise you should see a message stating the language was enabled. If creating the language fails for any other reason than already being enabled you must resolve that issue before proceeding to install your GIS applications.

$ createlang -d template_postgis plpgsql
createlang: language "plpgsql" is already installed in database "template_postgis"

Celebrate

PostGIS is now enabled in your PostgreSQL instance and you can use and/or develop exciting new GIS & geographic applications.

by whitemice at February 09, 2017 11:43 AM

February 03, 2017

Whitemice Consulting

Unknown Protocol Drops

I've seen this one a few times and it is always momentarily confusing: on an interface on a Cisco router there is a rather high number of "unknown protocol drops". What protocol could that be?! Is it some type of hack attempt? Ambitious, if they are shaping their own raw packets onto the wire. But, no, the explanation is the much less exciting, and typical, lazy ape kind of error.

  5 minute input rate 2,586,000 bits/sec, 652 packets/sec
  5 minute output rate 2,079,000 bits/sec, 691 packets/sec
     366,895,050 packets input, 3,977,644,910 bytes
     Received 15,91,926 broadcasts (11,358 IP multicasts)
     0 runts, 0 giants, 0 throttles
     0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
     0 watchdog
     0 input packets with dribble condition detected
     401,139,438 packets output, 2,385,281,473 bytes, 0 underruns
     0 output errors, 0 collisions, 3 interface resets
     97,481 unknown protocol drops  <<<<<<<<<<<<<<
     0 babbles, 0 late collision, 0 deferred

This is probably the result of CDP (Cisco Discovery Protocol) being enabled on one interface on the network and disabled on this interface. CDP is the unknown protocol. CDP is a proprietary Data Link layer protocol that, if enabled, sends an announcement out the interface every 60 seconds. If the receiving end gets the CDP packet and has "no cdp enable" in the interface configuration, those announcements count as "unknown protocol drops". The solution is to make the CDP settings, enabled or disabled, consistent on every device in the interface's scope.
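For example, to quiet the counter by turning CDP off on the neighbor's sending interface (the interface name here is illustrative):

configure terminal
 interface GigabitEthernet0/1
  no cdp enable
 end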

by whitemice at February 03, 2017 06:32 PM

Screen Capture & Recording in GNOME3

GNOME3, aka GNOME Shell, provides a comprehensive set of hot-keys for capturing images from your screen as well as recording your desktop session. These tools are priceless for producing documentation and reporting bugs; recording your interaction with an application is much easier than describing it.

  • Alt + Print Screen : Capture the current window to a file
  • Ctrl + Alt + Print Screen : Capture the current window to the cut/paste buffer
  • Shift + Print Screen : Capture a selected region of the screen to a file
  • Ctrl + Shift + Print Screen : Capture a selected region of the screen to the cut/paste buffer
  • Print Screen : Capture the entire screen to a file
  • Ctrl + Print Screen : Capture the entire screen to the cut/paste buffer
  • Ctrl + Alt + Shift + R : Toggle screencast recording on and off.

Recorded video is in WebM format (VP8 codec, 25fps). Videos are saved to the ~/Videos folder and image files are saved in PNG format in the ~/Pictures folder. When screencast recording is enabled there will be a red recording indicator in the bottom right of the screen; this indicator will disappear once screencasting is toggled off again.

by whitemice at February 03, 2017 06:29 PM

Converting a QEMU Image to a VirtualBox VDI

I use VirtualBox for hosting virtual machines on my laptop and received a Windows 2008R2 server image from a consultant as a compressed QEMU image. So how to convert the QEMU image to a VirtualBox VDI image?

Step#1: Convert QEMU image to raw image.

Starting with the file WindowsServer1-compressed.img (size: 5,172,887,552)

Convert the QEMU image to a raw/dd image using the qemu-img utility.

qemu-img convert  WindowsServer1-compressed.img  -O raw  WindowsServer1.raw

I now have the file WindowsServer1.raw (size: 21,474,836,480)

Step#2: Convert the RAW image into a VDI image using the VBoxManage tool.

VBoxManage convertfromraw WindowsServer1.raw --format vdi  WindowsServer1.vdi
Converting from raw image file="WindowsServer1.raw" to file="WindowsServer1.vdi"...
Creating dynamic image with size 21474836480 bytes (20480MB)...

This takes a few minutes, but finally I have the file WindowsServer1.vdi (size: 14,591,983,616)

Step#3: Compact the image

Smaller images are better! It is likely the image is already compact; however, this also doubles as an integrity check.

VBoxManage modifyhd WindowsServer1.vdi --compact
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%

Sure enough the file is the same size as when we started (size: 14,591,983,616). Upside is the compact operation went through the entire image without any errors.

Step#4: Cleanup and make a working copy.

Now MAKE A COPY of that converted file and use the copy for testing. Set the original as immutable [chattr +i] to prevent it being used by accident. I do not want to waste time converting the original image again.

Throw away the intermediate raw image and compress the image we started with for archive purposes.

rm WindowsServer1.raw 
cp WindowsServer1.vdi WindowsServer1.SCRATCH.vdi 
sudo chattr +i WindowsServer1.vdi
bzip2 -9 WindowsServer1-compressed.img 

The files at the end:

File                                    Size
WindowsServer1-compressed.img.bz2       5,102,043,940
WindowsServer1.SCRATCH.vdi             14,591,983,616
WindowsServer1.vdi                     14,591,983,616

Step#5

Generate a new UUID for the scratch image. This is necessary anytime a disk image is duplicated. Otherwise you risk errors like "Cannot register the hard disk '/archive/WindowsServer1.SCRATCH.vdi' {6ac7b91f-51b6-4e61-aa25-8815703fb4d7} because a hard disk '/archive/WindowsServer1.vdi' with UUID {6ac7b91f-51b6-4e61-aa25-8815703fb4d7} already exists" as you move images around.

VBoxManage internalcommands sethduuid WindowsServer1.SCRATCH.vdi
UUID changed to: ab9aa5e0-45e9-43eb-b235-218b6341aca9

Generating a unique UUID guarantees that VirtualBox is aware that these are distinct disk images.

Versions: VirtualBox 5.1.12, QEMU Tools 2.6.2. On openSUSE LEAP 42.2 the qemu-img utility is provided by the qemu-img package.

by whitemice at February 03, 2017 02:36 PM

January 24, 2017

Whitemice Consulting

XFS, inodes, & imaxpct

Attempting to create a file on a large XFS filesystem fails with an exception indicating insufficient space! There are available blocks - df says so. Huh? While, unlike traditional UNIX filesystems, XFS doesn't suffer from the boring old issue of "inode exhaustion", it does have inode limits - based on a percentage of the filesystem size.

linux-yu4c:~ # xfs_info /mnt
meta-data=/dev/sdb1              isize=256    agcount=4, agsize=15262188 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=61048752, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal               bsize=4096   blocks=29808, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=0
realtime =none                   extsz=4096   blocks=0, rtextents=0

The key is the "imaxpct" value. In this example inodes are limited to 25% of the filesystem's capacity. That is a lot of inodes! But some tools and distributions may default that percentage to a much lower value - like 5% or 10% (for what reason I don't know). This value can be set at filesystem creation time using the "-i maxpct=nn" option or adjusted later using the xfs_growfs command's "-m nn" option. So if you have an XFS filesystem with available capacity that is telling you it is full: check your "imaxpct" value, then grow the inode percentage limit.
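For example, raising the limit on a mounted filesystem, or setting it at creation time, looks something like this (the mount point, device, and percentage are illustrative):

xfs_growfs -m 25 /mnt
mkfs.xfs -i maxpct=25 /dev/sdb1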

by whitemice at January 24, 2017 07:59 PM

Changing FAT Labels

I use a lot of SD cards and USB thumb-drives; when plugging in these devices automount in /media as either the file-system label (if set) or some arbitrary thing like /media/disk46. So how can one modify or set the label on an existing FAT filesystem? Easy as:

mlabel -i /dev/mmcblk0p1 -s ::WMMI06  
Volume has no label 
mlabel -i /dev/mmcblk0p1  ::WMMI06
mlabel -i /dev/mmcblk0p1 -s :: 
Volume label is WMMI06

mlabel -i /dev/sdb1 -s ::
Volume label is Cruzer
mlabel -i /dev/sdb1  ::DataCruzer
mlabel -i /dev/sdb1 -s ::
Volume label is DataCruzer (abbr=DATACRUZER )

mlabel is provided by the mtools package. Since we are not using a drive letter, "::" is used together with the actual device specified via the "-i" directive. The "-s" directive means show; otherwise the command attempts to set the label to the value immediately following (no whitespace!) the drive designation [the default behavior is to set, not show].

by whitemice at January 24, 2017 07:51 PM

Deduplicating with group_by, func.min, and having

You have a text file with four million records and you want to load this data into a table in an SQLite database. But some of these records are duplicates (based on certain fields) and the file is not ordered. Due to the size of the data loading the entire file into memory doesn't work very well. And due to the number of records doing a check-at-insert when loading the data is also prohibitively slow. But what does work pretty well is just to load all the data and then deduplicate it. Having an auto-increment record id is what makes this possible.

class VendorCross(scratch_base):
    __tablename__ = 'sku'
    id      = Column(Integer, primary_key=True, autoincrement=True)
...

Once all the data has been loaded into the table, the deduplication is straightforward using minimum and group by.

# func and and_ are SQLAlchemy's SQL expression helpers
from sqlalchemy import and_, func

query = scratch.query(
    func.min( VendorCross.id ),
    VendorCross.sku,
    VendorCross.oem,
    VendorCross.part ).filter(VendorCross.source == source).group_by(
        VendorCross.sku,
        VendorCross.oem,
        VendorCross.part ).having(
            func.count(VendorCross.id) > 1 )
counter = 0
for (id, sku, oem, part, ) in query.all( ):
    counter += 1
    scratch.query(VendorCross).filter(
        and_(
            VendorCross.source == source, 
            VendorCross.sku == sku,
            VendorCross.oem == oem,
            VendorCross.part == part,
            VendorCross.id != id ) ).delete( )
    if not (counter % 1000):
        # Commit every 1,000 groups; SQLite does not like big transactions
        scratch.commit()
scratch.commit()

This incantation removes all the records from each group except for the one with the lowest id. The trick for good performance is to batch many deletes into each transaction - only commit every so many [in this case 1,000] groups processed; just also remember to commit at the end to catch the deletes from the last iteration.

by whitemice at January 24, 2017 07:45 PM

AIX Printer Migration

There are few things in IT more utterly and completely baffling than the AIX printer subsystem. While powerful, it accomplishes its task with more arcane syntax and scattered settings files than anything else I have encountered. So the day inevitably comes when you face the daunting task of copying/recreating several hundred print queues from some tired old RS/6000 we'll refer to as OLDHOST to a shiny new pSeries known here as NEWHOST. [Did you know the bar Stellas in downtown Grand Rapids has more than 200 varieties of whiskey on their menu? If you've dealt with AIX's printing subsystem you will understand the relevance.] To add to this Sisyphean task, the configurations of those printers have been tweaked, twiddled, and massaged individually for years - which rules out the wonderful possibility of telling some IT minion "make all these printers, set all the settings exactly the same" [thus convincing the poor sod to seek alternate employment, possibly as a bar-tender at the aforementioned Stellas].

Aside: Does IBM really, truly not provide a migration technique?  No, they do not.  Seriously.

But I now present to you the following incantation [to use at your own risk]:

scp root@OLDHOST:/etc/qconfig /etc/qconfig
stopsrc -cg spooler
startsrc -g spooler
rsync --recursive --owner --group --perms \
  root@OLDHOST:/var/spool/lpd/pio/@local/custom/ \
  /var/spool/lpd/pio/@local/custom/
rsync --recursive --owner --group --perms  \
  root@OLDHOST:/var/spool/lpd/pio/@local/dev/ \
  /var/spool/lpd/pio/@local/dev/
rsync --recursive --owner --group --perms  \
  root@OLDHOST:/var/spool/lpd/pio/@local/ddi/ \
  /var/spool/lpd/pio/@local/ddi/
chmod 664 /var/spool/lpd/pio/@local/ddi/*
chmod 664 /var/spool/lpd/pio/@local/custom/*
enq -d
cd  /var/spool/lpd/pio/@local/custom
for FILE in `ls`
 do
   /usr/lib/lpd/pio/etc/piodigest $FILE 
 done
chown root:printq /var/spool/lpd/pio/@local/custom/*
chown root:printq /var/spool/lpd/pio/@local/ddi/*
chmod 664 /var/spool/lpd/pio/@local/ddi/*
chmod 664 /var/spool/lpd/pio/@local/custom/*

Execute this sequence on NEWHOST and the print queues and their configurations will be "migrated". 
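
A quick sanity check - just a sketch, which assumes enq -A prints two header lines and that root on NEWHOST can ssh to OLDHOST - is to compare the queue names on the two hosts:

enq -A | awk 'NR > 2 { print $1 }' | sort -u > /tmp/NEWHOST.queues
ssh root@OLDHOST 'enq -A' | awk 'NR > 2 { print $1 }' | sort -u > /tmp/OLDHOST.queues
diff /tmp/OLDHOST.queues /tmp/NEWHOST.queues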

NOTE#1: This depends on all those print queues being network attached printers.  If the system has direct attached printers that correspond to devices such as concentrators, lion boxes, serial ports, SCSI buses,.... then please do not do this, you are on your own.  Do not call me, we never talked about this.

NOTE#2: This will work once.  If you've then made changes to printer configuration or added/removed printers do not do it again.  If you want to do it again first delete ALL the printers on NEWHOST.  Then reboot, just to be safe.  At least stop and start the spooler service after deleting ALL the printer queues.

NOTE#3: I do not endorse, warranty, or stand behind this method of printer queue migration.  It is probably a bad idea.  But the entire printing subsystem in AIX is a bad idea, sooo.... If this does not work do not call me; we never talked about this.

by whitemice at January 24, 2017 11:46 AM

The source files could not be found.

I have several Windows 2012 VMs in a cloud environment and discovered I am unable to install certain roles / features. Attempting to do so fails with a "The source files could not be found." error. This somewhat misleading message indicates Windows is looking for the OS install media. Most of the solutions on the Interwebz for working around this error describe how to point the server at an alternate path to the install media ... the problem being that these VMs were created from a pre-activated OVF image and there is no install media available from the cloud's library.

Lacking install media the best solution is to set the server to skip the install media and grab the files from Windows Update.

  1. Run "gpedit.msc"
  2. "Local Computer Policy"
  3. "Administrative Templates"
  4. "System"
  5. Enable "Specify settings for optional component installation and component repair"
  6. Check the "Contact Windows Update directory to download repair content instead of Windows Server Update Services (WSUS)"

Due to technical limitations WSUS cannot be utilized for this purpose, which is sad given that there is a WSUS server sitting in the same cloud. :(

by whitemice at January 24, 2017 11:31 AM

January 05, 2017

As it were ...

A Grand Experiment

Well, it’s time for a new job. “What?!?!” you ask. “Didn’t you just get a new job a few months ago?”

Indeed I did. This last August I ended my time with Pippin and moved to Modern Tribe. For a variety of reasons it didn’t work out. No-one’s upset, I still love and respect them, they still like me, it just wasn’t what either of us expected.

So, on to the future.

The plan at this point is to merge my experience as a freelancer with Tanner Moushey’s company and his experience as a freelancer and form a new WordPress agency. We’re doing a short trial period first, just to make sure this is really what we want, but by summer we should have a new company brand etc.

The General Plan

Our goal is freedom, both for ourselves and the people who work for us. This means not being married to the job, or making the job super complicated. We’d like to stay small and flexible, and do relatively small projects. We’re not looking to be a VIP agency or anything.

How You Can Help

If you need any web dev help, let me know. Tell your friends etc. I’m back to taking contracts. This time we’re a team though, which makes for a lot more depth, stability, and security.

This feels so so good, the best I’ve felt about a job since the first time I went 100% freelance.

Thank you for your support.

The post A Grand Experiment appeared first on As it were....

by topher at January 05, 2017 05:22 PM

November 02, 2016

As it were ...

Building a custom Google Map

For about a year now I’ve had a Google map on HeroPress.com showing pins of where my contributors are from. I’ve been using Maps Builder Pro from WordImpress. It’s an excellent plugin, and does many of the things I wanted, but not all of them. Here’s what I was after:

My contributors are a custom content type in WordPress, not just authors. Maps Builder Pro provides a search box in the admin of each contributor to search for a location on Google Maps. Then I simply click the location and it fills in a bunch of meta boxes with data like coordinates, city name, and some unique location data.

I wanted a plugin that would automatically go get all that data, organize people by location, grouping people who are from the same location, and put in one pin per location, with the bubbles showing all the people from that location.

The map I made with Maps Builder Pro let me do most of this, but manually.  I had to keep the map up to date each week, and I was terrible at that.

So I wanted a new plugin, but I dearly love the admin UI for gathering and storing data that Maps Builder Pro provides. So that plugin remains, and I’ll use it that way. I built a new plugin for rendering the map with my requirements.

What I learned

I started with a tutorial by a guy named Ian Wright. It’s excellent, as are all of his maps tutorials. I highly recommend them.

Data Organization

The pins and the contents of the pins are two different data sets in Javascript, and they’re related by order. So pin 1 pairs with content block 1, and pin 42 goes with content block 42.  This means you need to have a content block for every pin, even if it’s empty, so that the 42’s match up properly.

Bounding

Ian’s tutorial uses bounding to set the zoom and center for the map. I didn’t understand that, so when I tried to change it, I failed terribly. Here’s what that all means.

When creating a pin we put in

bounds.extend(position);

which tells the map object the bounds of the pins on a map. Then we put in

map.fitBounds(bounds);

which tells the map to zoom just the right amount so you can see all the pins, and center on the middle of them. This made it so that when I later tried to make a different center with setCenter() it didn’t work.

Additionally, when I removed the fitBounds() function the whole map broke. This is because you MUST use some sort of centering code, and I had neither fitBounds() nor setCenter().

The key was to have a setCenter() and NOT have a fitBounds(). Then I was able to easily have a setZoom as well.

Static Maps

I just found out that you can have the Maps API return an image rather than an interactive map.  So you can programmatically make the map, but it loads as fast as an image.  If you don’t need interactivity then it’s a MUCH better way to go.  I’m thinking of putting a small map on each contributor’s page with a single pin, showing where they’re from. It would then link to a full Google Map.
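
For reference, a static map is just an image built from URL query parameters; a rough sketch against the Static Maps endpoint (the coordinates, size, and API key here are placeholders):

curl -o map.png "https://maps.googleapis.com/maps/api/staticmap?center=42.96,-85.67&zoom=5&size=600x300&markers=42.96,-85.67&key=YOUR_API_KEY"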

In Summary

I’ve heard a fair number of people whine about how terrible the Google Maps API is, but I really like it.  I don’t know Javascript, and I was able to easily adapt some tutorial code, read the docs to extend it, and make something really slick. I really recommend it.

The post Building a custom Google Map appeared first on As it were....

by topher at November 02, 2016 02:44 AM

October 16, 2016

As it were ...

The Right Stuff

Recently a friend started working on a WordPress plugin. The plugin was scratching an itch, counting the words in a collection of posts and rendering the count in a widget, as an incentive to post regularly. In the process of building the plugin she tweeted quite a bit, about successes, struggles, and frustrations. At one point I sent her some encouragement:

She was right, I hadn’t seen her code.  I’d never seen any of her code. At that point I didn’t know if she could code at all. But I knew she was doing awesome. How?

I could tell from her tweets that she was struggling with things, doing research, overcoming those things, and moving on. Anyone who can complete that process is essentially unstoppable as a developer. That process also works in any other walk of life.

Do you have what it takes to be a WordPress developer? Or any kind of developer? Or anything else in life? If you can confront your struggles head on, find a solution, and move on, you will be unstoppable.

The post The Right Stuff appeared first on As it were....

by topher at October 16, 2016 10:56 PM

October 03, 2016

Whitemice Consulting

Playing With Drive Images

I purchased a copy of Windows 10 on a USB thumbdrive. I chose physical media so that I would have (a) a backup and (b) no need to download a massive image. Primarily this copy of Windows will be used in VirtualBox for testing, using PowerShell, and other tedious system administrivia. The first thing I did when it arrived was use dd to make a full image of the thumbdrive so I could tuck it away in a safe place.

dd if=/dev/sde of=Windows10.Thumbdrive.20160918.dd bs=512

But now the trick is to take that raw image and convert it to a VMDK so that it can be attached to a virtual machine. The VBoxManage command provides this functionality:

VBoxManage internalcommands createrawvmdk -filename Windows10.vmdk -rawdisk Windows10.Thumbdrive.20160918.dd

Now I have a VMDK file. If you do this you will notice the VMDK file is small - it is essentially a pointer to the disk image; the purpose of the VMDK is to provide the metadata necessary to make the hypervisor (in this case VirtualBox) happy. The upshot is that you cannot delete the dd image, as it is now part of your VMDK.
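
Attaching the resulting VMDK to a virtual machine can be done from the GUI or the command line; a minimal sketch, assuming a VM named "Win10Test" with a storage controller named "SATA" (both names are placeholders):

VBoxManage storageattach Win10Test --storagectl SATA --port 1 --device 0 --type hdd --medium Windows10.vmdk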

Note that this dd file is a complete disk image; including the partition table:

awilliam@beast01:/vms/ISOs> /usr/sbin/fdisk -l Windows10.Thumbdrive.20160918.dd
Disk Windows10.Thumbdrive.20160918.dd: 14.4 GiB, 15502147584 bytes, 30277632 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x00000000
Device                            Boot Start      End  Sectors  Size Id Type
Windows10.Thumbdrive.20160918.dd1 *     2048 30277631 30275584 14.4G  c W95 FAT32 (LBA)

So if I wanted to mount that partition on the host operating system I can do that by calculating the offset and mounting through loopback. The offset to the start of the partition within the drive image is the starting sector multiplied by the sector size: 2048 * 512 = 1048576. The mount command provides support for offset mounting:

beast01:/vms/ISOs $ sudo mount -o loop,ro,offset=1048576 Windows10.Thumbdrive.20160918.dd /mnt
beast01:/vms/ISOs # ls /mnt
83561421-11f5-4e09-8a59-933aks71366.ini  boot     bootmgr.efi  setup.exe                  x64
autorun.inf                              bootmgr  efi          System Volume Information  x86
beast01:/vms/ISOs $ sudo umount /mnt
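
An alternative to doing the offset arithmetic by hand - assuming a util-linux new enough to have losetup --partscan - is to let the kernel read the partition table out of the image and expose each partition as a loop device:

sudo losetup --find --show --read-only --partscan Windows10.Thumbdrive.20160918.dd
# losetup prints the device it allocated, e.g. /dev/loop0; the first partition then appears as /dev/loop0p1
sudo mount -o ro /dev/loop0p1 /mnt
sudo umount /mnt && sudo losetup -d /dev/loop0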

If all I wanted was the partition, and not the drive, the same offset logic could be used to lift the partition out of the image into a distinct file:

dd if=Windows10.Thumbdrive.20160918.dd of=Windows10.image bs=512 skip=2048

The "Windows10.image" file could be mounted via loopback without bothering with an offset. It might however be more difficult to get a virtual host to boot from a FAT partition that does not have a partition table.

by whitemice at October 03, 2016 10:43 AM

September 15, 2016

Whitemice Consulting

Some Informix DATETIME/INTERVAL Tips

Determine the DATE of the first day of the current week.

(SELECT TODAY - (WEEKDAY(TODAY)) UNITS DAY FROM systables WHERE tabid=1)

Informix always treats Sunday as day 0 of the week. The WEEKDAY function returns the day of the week as a value of 0 - 6, so subtracting the weekday from the current day (TODAY) returns the DATE value of the Sunday of the current week.

Determining HOURS between two DATETIME values.

It is all about the INTERVAL data type and its rather odd syntax.

SELECT mpr.person_id, mpr.cn_name, 
  ((SUM(out_time - in_time))::INTERVAL HOUR(9) TO HOUR) AS hours
FROM service_time_card stc
  INNER JOIN morrisonpersonr mpr ON (mpr.person_id = stc.technician_id)
WHERE mpr.person_id IN (SELECT person_id FROM branch_membership WHERE branch_code = 'TSC')
  AND in_time > (SELECT TODAY - (WEEKDAY(TODAY)) UNITS DAY FROM systables WHERE tabid=1)  
GROUP BY 1,2

The "(9)" part of the expression INTERVAL HOUR(9) TO HOUR is key - it allocates lots of room for hours, otherwise any value of more than a trivial number of hours will cause the clearly correct by not helpful SQL -1265 error: "Overflow occurred on a datetime or interval operation". As, in my case I had a highest value of 6,483 hours I needed at least HOUR(4) TO HOUR to avoid the overflow error. HOUR(9) is the maximum - an expression of HOUR(10) results in an unhelpful generic SQL -201: "A syntax error has occurred.". On the other hand HOURS(9) is 114,155 years and some change, so... it is doubtful that is going to be a problem in most applications.

by whitemice at September 15, 2016 07:46 PM

August 08, 2016

As it were ...

July 28, 2016

As it were ...

A new job at Modern Tribe

I’m happy to announce that today is my last day at Sandhills Development (working on Easy Digital Downloads with Pippin), and that Monday will be my first day at Modern Tribe. In this post I hope to answer a few common questions.  🙂

Why? Didn’t you just get a new job?

I’ve been with Pippin for just over a year. I joined his team to be doing things other than development, things like documentation, community involvement etc. At the time I was coming off the high of promoting HeroPress and having a wonderful time Not Developing.

As it turns out, developing is what really excites me in my career. I simply never fell in love with writing docs the way I thought I would.

I’d like to be clear that Pippin is a wonderful boss and his company is a spectacular place to work. We’re parting on very good terms.

What will you be doing at Modern Tribe?

I applied for the job of Lead Developer. I’m not going to drop into that position immediately, that would be foolish until I know the culture and processes better. I’ll be doing whatever they tell me to do. 🙂 Developer is my primary purpose for being there though.

Still going to do HeroPress?

Yep, I do that in my spare time, and Modern Tribe doesn’t have a problem with that. Personally, many of the folks there are big fans of HeroPress.

Additionally, some exciting things are happening around the idea of expanding HeroPress a bit, more on that later.

 

The post A new job at Modern Tribe appeared first on As it were....

by topher at July 28, 2016 11:44 AM

July 23, 2016

As it were ...

Dragons fly

image

image

Caught this little guy on the edge of the grill the other night.

The post Dragons fly appeared first on As it were....

by topher at July 23, 2016 09:12 PM

July 17, 2016

As it were ...

My Birth Day

In the months before my dad died we went through a lot of Stuff. Some of it was his, some was my moms, some from his parents and in-laws.

One of the boxes he showed me held a bunch of diaries from my maternal Grandmother. I never knew they existed, so I started looking through them until I came to 1971. I slowly flipped through until I came to July 17.  Here’s what I found:

It’s a treasure for me to be able to see her handwriting again, to read what she had to say to us, about me.

The post My Birth Day appeared first on As it were....

by topher at July 17, 2016 04:00 AM

July 03, 2016

As it were ...

Honey bee on Lavendar

image

The honey bees are really loving our Lavendar this year.

image

I managed to catch this one on mid flight.

The post Honey bee on Lavendar appeared first on As it were....

by topher at July 03, 2016 01:54 PM

June 21, 2016

As it were ...

WordCamp Europe

A few months ago the owner of HeroPress sent me an email and said “I think you should go to WordCamp Europe for HeroPress, can I cover that for you?” I thought long and hard for about 4 seconds before saying yes!  We decided to throw in some extra so that my wife could go along, she’s as much a part of HeroPress as I am.

So now we’re off to Vienna, Austria for WordCamp! We’re super excited.  We’re flying Austrian Airlines, which doesn’t fly out of Grand Rapids, so today we drove to Chicago. Just as we were getting to the hotel in Chicago the battery light came on in the Jeep, and it started sounding odd.  When we got there and I opened the hood the alternator was smoking.

I called AAA, and for $40 I was able to get us to a cooler insurance plan that covers towing the Jeep all the way back to Grand Rapids at no extra cost. Now we’re looking at taking the train back to Grand Rapids once we get home from Austria.

Our trip will quite literally involve planes, trains, and automobiles.

by topher at June 21, 2016 12:58 AM

A new kind of post

I’m sure both of my regular readers have noticed a recent flurry of posts that are simply photos, and mostly flowers at that.  I’ve always enjoyed taking macro shots of flowers, and I’ve always wanted to post them easily to my blog rather than to some service. I finally spent the time to figure out the workflow for the WordPress android app, and now you’re getting a lot more photos.

I wish I felt the urge to blog more, there are lots of things I’d love to have logged here, but my heart just isn’t in it.  So except for some posts here in the next few days it’ll probably just be lots of photos for a while.

I hope you enjoy it.

by topher at June 21, 2016 12:43 AM

June 18, 2016

As it were ...

Wild Roses

image

Seen in the wilds of the Target parking lot.

by topher at June 18, 2016 07:38 PM

June 16, 2016

As it were ...

Fairy umbrella

image

Saw several of these in the flower bed this morning after a rain.

by topher at June 16, 2016 02:57 PM

June 11, 2016

As it were ...

Purple spikes

image

There are tiny spider webs on this one too.

by topher at June 11, 2016 06:59 PM

June 10, 2016

As it were ...

June 06, 2016

As it were ...

Purple haze

White flower with three petals, purple fuzz in the center, and 4 little stalks in the center.

I love the purple fuzz in the middle of this flower.

by topher at June 06, 2016 05:56 PM

Purple Flower

I need to do a better job getting the names of flowers.

by topher at June 06, 2016 02:45 PM

March 31, 2016

As it were ...

A day at Meijer Gardens

The other day my sister came to visit and brought her two daughters and our niece. We got together with my friend John and his wife and kids and went to Meijer Gardens. A good time was had by all, here are some pictures.

Photo captions: Common Butterwort, green spike with red fringes; Sammy, looks like Linus Van Pelt; North American Pitcher Plant; statue of baboon baby; Unknown red flowers; Monarch Butterfly chrysalis, green with gold flecks and black trim; Pink flowers on arch; Giant moth, bigger than my niece's hand; Statue of little girl watering plants; Two butterflies touching, close up; Brown flowers from last fall; Butterfly close up; View upward from the floor of the glass house, plants everywhere; Da Vinci's Horse, a man's head goes up to the horse's knee; Da Vinci's Horse up close; A little stream in the Japanese garden; Tulips; Chihuly sculpture; Cartoonish statue titled Mad Mom; Clouds in a wave formation.

Also some video.

by topher at March 31, 2016 05:20 PM

March 27, 2016

As it were ...

Louis L’Amour

When I was in high school a friend suggested I try reading a Louis L’Amour book, and I didn’t enjoy it. A small group of people were struggling across the desert and then they reached the ocean and it was done. Bleh.

About a year later I was camping with a friend and he brought The Last of the Breed by L’Amour, and I read the first few pages one afternoon. That night after he hit the sack I read the next 200 pages. He let me borrow it and my whole family read it, and we read it to pieces. We bought him a nice hardcover copy in thanks. That’s when I fell in love with Louis L’Amour books.

Over the next 3 years or so I read everything he ever wrote. He had a contract to write 3 books per year, which is kind of crazy. Most of them were about 120-150 pages each, mostly about the American West in the 1800’s. This is what gave him a reputation as a western novelist. He had well over 100 novels released in his lifetime.

While writing all those short novels though, he was working on his long form novels, and those were my favorite. Only a couple were about the American West. The Last of the Breed is set in the 1980’s in Siberia. The Walking Drum is set in the 1100’s over most of Europe. Jubal Sackett is set in America in the early 1600’s and ranges from the East Coast to the other side of the Mississippi.

He also did an excellent series about the Sackett family. It starts in England in 1599 and over the next couple books moves to the East Coast of America, and even dips down into the Caribbean a bit. It swings through the 1700’s once, and then follows several brothers across the American West in the 1800’s. Most of these are pretty short, but a few are longer, and Jubal Sackett is very long.

Someone asked me for some favorites recently, and I’d like to say that I like them all because he’s an excellent writer. I’ve read other Western Novelists and found that I don’t really prefer the genre. L’Amour is just excellent. But here’s a list.

The Walking Drum is by far my favorite.

The Last of the Breed by far my second favorite, I did a review once long ago.

The Lonesome Gods is set in the American West, but it’s long form, with tons more of the history and politics of early California in it. Did you know San Francisco was Yerba Buena first? Good herb sounds like a great name for a city in California.

I love the entire Sackett series, it’s just great.

West From Singapore, Yondering, and Beyond the Great Snow Mountains are all collections of short stories set in the early 20th century, in the South Pacific and both Eastern and Western Asia. Many of these were inspired by L’Amour’s time in those locations at that time. There’s a strong feeling of Indiana Jones in here, and he wrote them decades before Indy came on the scene.

Sitka is set in Alaska, so it has that frontier feel, but it’s a different location.

The Ferguson Rifle is about the first quick-load rifle, and the impact it had on the West.

I’ll let it go at that, but keep in mind that just about everything he wrote is great. Wikipedia has a nice book list, look at the series, those are always a little better because he plans well.

by topher at March 27, 2016 08:13 PM

March 07, 2016

OpenGroupware (Legacy and Coils)

Task Retention / Auto-Archiving

The expectation is that users creating tasks will archive those tasks when they are either completed or rejected; this is the completion of the task work-flow. However it may be advantageous, at least for certain kinds of tasks, to ensure that tasks are archived at some point even if the owner chooses to ignore them [archiving a task removes it from the executant's task list]. To facilitate auto-archival the configuration document named TaskRetention.yaml exists in project 7,000. For administrators this document should be available via WebDAV in the /dav/Administration folder.
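
A sketch of round-tripping the document over WebDAV with curl - the host name and credentials are placeholders, and any WebDAV-capable client would work as well:

curl -u adam:secret -o TaskRetention.yaml https://coils.example.com/dav/Administration/TaskRetention.yaml
# edit the local copy, then PUT it back
curl -u adam:secret -T TaskRetention.yaml https://coils.example.com/dav/Administration/TaskRetention.yaml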

The task retention document is a YAML dictionary relating task kind values to data retention rules. The key of the dictionary is a case-sensitive task kind string; the string generic corresponds to all tasks having a NULL kind. The value for each key is a dictionary supporting the following keys:

  • autoArchiveThreshold – the number of days, expressed as an integer, after which the action will automatically archive a task that is in a rejected or completed state.

Service_Laptop_Update: {'autoArchiveThreshold': 3, }
TQIRTS.ISSUE:SERVICE_DIAGNOSTIC_ASSISTANCE: {'autoArchiveThreshold': 30, }
PQIRTS.ISSUE:CP_ERROR: {'autoArchiveThreshold': 14, }
PQIRTS.ISSUE:DEFECTIVE_PART: {'autoArchiveThreshold': 14, }
PQIRTS.ISSUE:BAD_CROSS: {'autoArchiveThreshold': 14, }
WEBSITE_ENH: {'autoArchiveThreshold': 60, }
PARTS.XREFR.ADD: {'autoArchiveThreshold': 14, }
generic: {'autoArchiveThreshold': 90, }

Text 1: Example TaskRetention.yaml document. In this document the generic kind establishes a 90 day rule for automatically archiving tasks with a NULL kind. Other key values relate to tasks of specific kinds.

The values defined in this configuration document are applied to the task database by the archiveOldTasksAction workflow action. If a workflow route declaring an archiveOldTasksAction is not performed, the values defined in this document will have no effect. The expectation is that a route declaring this action will be created and scheduled to be performed at some regular interval. Sites generating more tasks are encouraged to perform the auto-archiving work-flow more frequently than sites generating fewer tasks.

Author: Adam Tauno Williams

by whitemice at March 07, 2016 02:18 PM

March 01, 2016

OpenGroupware (Legacy and Coils)

New Feature: PDF Scrubbrush

PDF is intended to provide an entirely portable mechanism for the exchange of non-trivial documents. In practice, however, documents created by the real-world variety of clients almost inevitably contain deviations from the PDF standard which create issues when the document is processed by other applications and platforms. To ensure maximum compatibility the scrub brush feature re-compiles documents on the server using the Poppler libraries.

[workflow tmp]# pdffonts prescrubbed-document.pdf 
name                                 type              encoding         emb sub uni object ID
------------------------------------ ----------------- ---------------- --- --- --- ---------
CCQPLO+Arial                         CID TrueType      Identity-H       yes yes yes     18  0
ArialMT                              TrueType          WinAnsi          no  no  no      26  0
Arial-BoldMT                         TrueType          WinAnsi          no  no  no      28  0
QVDGUR+MinionPro-Regular             CID Type 0C       Identity-H       yes yes yes     30  0
FZPPKI+MinionPro-Regular             CID Type 0C       Identity-H       yes yes yes     38  0
Helvetica-Bold                       Type 1            Custom           no  no  no      52  0
Helvetica                            Type 1            Custom           no  no  no      58  0
ZapfDingbats                         Type 1            ZapfDingbats     no  no  no     188  0

Text 1: The pdffonts report of a document which references non-standard fonts but does not contain the fonts. This document is unlikely to render correctly by clients on platforms other than that which created it.

The most common defect is that the PDF references non-standard fonts which are also not embedded into the PDF document – such fonts will either not display when viewed on other clients or may be replaced, often unsuccessfully, based on the viewer's font substitution tables. PDF documents can be examined using the pdffonts tool provided by the Poppler project.

[workflow tmp]# pdffonts scrubbed-document.pdf 
name                                 type              encoding         emb sub uni object ID
------------------------------------ ----------------- ---------------- --- --- --- ---------
KOFYDJ+LiberationSans                TrueType          WinAnsi          yes yes yes      5  0
RXEXJO+LiberationSans-Bold           TrueType          WinAnsi          yes yes yes      6  0
IOKNUX+Arial                         TrueType          WinAnsi          yes yes yes      7  0
AADISD+MinionPro-Regular             CID Type 0C       Identity-H       yes yes yes     10  0
AGFLBT+MinionPro-Regular             CID Type 0C       Identity-H       yes yes yes     11  0
THKLNC+NimbusSanL-Bold               Type 1            WinAnsi          yes yes yes     12  0
CIXCHU+NimbusSanL-Regu               Type 1            WinAnsi          yes yes yes     13  0
QAOVNF+Dingbats                      Type 1            Builtin          yes yes yes     14  0

Text 2: The same document as previously after being processed by the scrubbrush; fonts have been substituted based on Poppler's font substitution tables, and those fonts are now all embedded in the document. This document should render consistently regardless of client application or platform.

The scrub brush feature is available in the following workflow actions:

  • searchDocumentsToZIPFileAction
  • folderToZipFileAction

In the future the scrub brush feature will be made available in the messageToINBOXAction and documentToMessageAction workflow actions.
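
Outside of the workflow engine roughly the same repair can be approximated from the command line with Poppler's pdftocairo, which re-renders a document to a new PDF (the file names here match the examples above):

pdftocairo -pdf prescrubbed-document.pdf scrubbed-document.pdf
pdffonts scrubbed-document.pdf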

Author: Adam Tauno Williams

by whitemice at March 01, 2016 02:11 PM

January 27, 2016

As it were ...

Saturn Run

Saturn Run
John Sandford, Ctein
Fiction
Putnam Juvenile
October 6, 2015
496 pages

I’ve been in a sci-fi book club now for 5 years, and I don’t think I’ve ever reviewed a single book we’ve read.  I’ll try to rectify that, but we’ll see.  Here’s the start.

Saturn Run is sort of a first contact story. It’s set in 2066 and the years after, and someone notices an interstellar ship dock near Saturn and then leave. The US and China begin a space race to see who can get people out there first.

The book is heavy on hard science.  There’s even an essay at the end about the science, and which parts were totally bogus (only one part really). I liked that, it put me in mind of The Martian.

The majority of the book is about the travel from Earth to Saturn, and mostly from the US ship’s viewpoint. Near the end we get some good character development on the Chinese side, and some good interaction with what they find orbiting Saturn, which I am NOT going to tell you about.

The only thing I really didn’t like was that there were a few loose ends that weren’t REALLY tied up. The tying was sort of an offhanded thing and left as many questions as answers.  On the other hand, there was one loose end I never expected to be tied up that WAS, and I thought it was very classy.

On the whole I highly recommend this book, especially if you like hard science sci-fi.

by topher at January 27, 2016 03:46 PM

November 23, 2015

As it were ...

A Year of HeroPress

It was one year ago on 21 November that my boss emailed me and told me it was time to do something different.  “I want you to do something special for WordPress” he said.  I knew right then that life would never be the same, and I was right.

I didn’t know it for a couple months, but that’s where HeroPress was really started.  In the last year I’ve had some amazing experiences, some hard times, and felt wonderful support from my family, friends, and people I’d never met before.

I’ve been to India and met literally hundreds of new people, many of whom are now dear friends.

I’ve failed and I’ve succeeded.

I have a completely new job doing something I really enjoy, but has nothing to do with anything I was doing at the start of the year.

Now I’m winding up 2015 with a sense of peace and accomplishment.  I’m proud of HeroPress and happy with where it’s going.

A giant thank you to everyone who’s been involved: my family, Dave Rosen, everyone at Pressnomics last year, everyone who commented on WPTavern about HeroPress, and everyone who’s contributed an essay.  Relatively speaking, HeroPress is made up far more of all of you than it is of me.

Thank you.

by topher at November 23, 2015 09:12 PM

August 28, 2015

Ben Rousch's Cluster of Bleep

Kivy – Interactive Applications and Games in Python, 2nd Edition Review

I was recently asked by the author to review the second edition of “Kivy – Interactive Applications in Python” from Packt Publishing. I had difficulty recommending the first edition mostly due to the atrocious editing – or lack thereof – that it had suffered. It really reflected badly on Packt, and since it was the only Kivy book available, I did not want that same inattention to quality to reflect on Kivy. Packt gave me a free ebook copy of this book in exchange for agreeing to do this review.

At any rate, the second edition is much improved over the first. Although a couple of glaring issues remain, it looks like it has been visited by at least one native English speaking editor. The Kivy content is good, and I can now recommend it for folks who know Python and want to get started with Kivy. The following is the review I posted to Amazon:

This second edition of “Kivy – Interactive Applications and Games in Python” is much improved from the first edition. The atrocious grammar throughout the first edition has mostly been fixed, although it’s still worse than what I expect from a professionally edited book. The new chapters showcase current Kivy features while reiterating how to build a basic Kivy app, and the book covers an impressive amount of material in its nearly 185 pages. I think this is due largely to the efficiency and power of coding in Python and Kivy, but also to the carefully-chosen projects the author selected for his readers to create. Despite several indentation issues in the example code and the many grammar issues typical of Packt’s books, I can now recommend this book for intermediate to experienced Python programmers who are looking to get started with Kivy.

Chapter one is a good, quick introduction to a minimal Kivy app, layouts, widgets, and their properties.

Chapter two is an excellent introduction and exploration of basic canvas features and usage. This is often a difficult concept for beginners to understand, and this chapter handles it well.

Chapter three covers events and binding of events, but is much denser and difficult to grok than chapter two. It will likely require multiple reads of the chapter to get a good understanding of the topic, but if you’re persistent, everything you need is there.

Chapter four contains a hodge-podge of Kivy user interface features. Screens and scatters are covered well, but gestures still feel like magic. I have yet to find a good in-depth explanation of gestures in Kivy, so this does not come as a surprise. Behaviors is a new feature in Kivy and a new section in this second edition of the book. Changing default styles is also covered in this chapter. The author does not talk about providing a custom atlas for styling, but presents an alternative method for theming involving Factories.

In chapter six the author does a good job of covering animations, and introduces sounds, the clock, and atlases. He brings these pieces together to build a version of Space Invaders, in about 500 lines of Python and KV. It ends up a bit code-dense, but the result is a fun game and a concise code base to play around with.

In chapter seven the author builds a TED video player including subtitles and an Android actionbar. There is perhaps too much attention paid to the VideoPlayer widget, but the resulting application is a useful base for creating other video applications.

by brousch at August 28, 2015 01:16 AM