Planet GRLUG

October 21, 2019

Whitemice Consulting

PostgreSQL: "UNIX Time" To Date

In some effort to avoid time-zone drama, or perhaps due to fantasies of efficiency, some developer put a date-time field in a PostgreSQL database as an integer; specifically as a UNIX Time value. ¯\_(ツ)_/¯

How to present this as a normal date in a query result?

date_trunc('day', (TIMESTAMP 'epoch' + (j.last_modified * INTERVAL '1 second'))) AS last_action,

This is the start of the epoch plus the value in seconds - UNIX Time - calculated and cast as a non-localized year-month-day value.

Clarification#1: j is the alias of the table in the statement's FROM.

Clarification#2: last_modified is the field which is an integer time value.
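
For a quick sanity check outside the database the same conversion can be reproduced in Python; a minimal sketch (the value below is just an example UNIX Time):

from datetime import datetime, timezone

# An example UNIX Time value as stored in the integer column
last_modified = 1571664960

# Equivalent of TIMESTAMP 'epoch' + (last_modified * INTERVAL '1 second')
ts = datetime.fromtimestamp(last_modified, tz=timezone.utc)

# Equivalent of date_trunc('day', ...) presented as a date
print(ts.date())  # 2019-10-21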

by whitemice at October 21, 2019 01:36 PM

October 19, 2019

As it were ...

When It’s Not Imposter Syndrome

I remember the first time I heard about imposter syndrome. It was from Chris Lema. At the time I thought “Huh, that makes sense that people would struggle with that”.

Ironically, I don’t think I’ve ever really struggled with imposter syndrome. I’ve always had a pretty good handle on what I’m good at, and what I can do. When I compare myself with others it’s usually either to figure out how to boost myself to their level, or to simply admire the fact that they’re at a level I’ll never reach. I can be content with that.

There’s always been a thought in the back of my head though, “What if it’s NOT imposter syndrome? What if some people really aren’t Good Enough for the task at hand, and have been lucky? Am I there?”

For a long time I wasn’t there. I was Good Enough for the task at hand. I could objectively say “They asked me to do X, I know how to do X, so I’ll go do it”.

But then one day I wasn’t.

I took a job with a big agency that does some big, hard projects. I knew how to do much of what they asked, and I assumed I’d learn how to do what I didn’t know. As it turned out, they hired me expecting me to already know those things. This was a mistake on both our parts.

If we had had lots of time, and money were not an issue, they could have taught me what I needed to know. But the reason they hired me was to get a specific job done. For that job I wasn’t Good Enough.

This wasn’t immediately apparent. Everyone takes a little time to get into the swing of things at a new job. It’s when that swing doesn’t happen that you start to wonder. It was in my third month on the job that I think everyone realized I wasn’t Good Enough for this job. I wasn’t happy with the work I was doing, and I wasn’t happy with how often I had to ask someone else to stop what they were doing to do what I was supposed to do because I didn’t know how.

My supervisor was sympathetic, but had to do his job, which was to properly staff his team. He gave me 2 months to figure things out, but also told me early enough that I could look for a job at WCUS.

In the end I was at the company for only 5 months. They gave me a small severance package, which was very kind of them; they didn’t need to.

I’ve thought a lot about this experience over the last few years. Ironically it still didn’t give me imposter syndrome. I still knew what I was good at, and now I knew something I wasn’t good at.

It’s really really important to remember that just because you’re not good enough for a specific task doesn’t mean you’re not Good Enough. Just because you don’t know something doesn’t mean you’re dumb. Take your experience and learn from it.

Mine started me down the road toward not being a professional web developer anymore. I’ll never stop BEING a web developer, just like a plumber doesn’t stop knowing how to fix a pipe when he retires. But day to day he’s doing something else.

Now I’m doing something else. Something I’m actually better at than web development, and it brings me joy and provides for my family.

I want to summarize by saying that if what you’re doing doesn’t feel right you should think hard about it. Some people will tell you “Oh that’s just imposter syndrome”. And they might very well be right. But look deeply anyway. Find your OWN path.

As a person you’re ALWAYS Good Enough. Whether you’re prepared for the task at hand is another thing entirely.

The post When It’s Not Imposter Syndrome appeared first on As it were....

by topher at October 19, 2019 11:11 PM

October 13, 2019

As it were ...

Time To Change My Eating

I’ve put on weight every month for the last 36 months or so. Every month I’ve weighed the most I’ve ever weighed. I’m getting tired of it. This last Monday I started a “diet”. It’s loosely called intermittent fasting, though that term means different things for different people.  For me it means I only eat during a 4 hour period in a day.  That’s a 20 hour fast. I chose to eat only between 4pm and 8pm.  Practically speaking I generally don’t eat until 5 or 5:30 when my family eats.  I also find myself cheating a little in the evening, and snacking between 8 and 9. But if I start late, I don’t mind ending late.

I haven’t had real hunger during the day yet, it’s been really really comfortable.

I whipped up a WordPress plugin to help me track how I’m doing.  This chart will be updated every day:

So far I’m losing a pound a day. I have no idea how long this will go on so easily, we’ll see. At this rate I should be down 25lbs by WCUS. We’ll see if anyone notices.

The post Time To Change My Eating appeared first on As it were....

by topher at October 13, 2019 11:10 PM

September 24, 2019

As it were ...

Living Life With Tourette Syndrome

I was 47 years old when I learned I’d had Tourette Syndrome ever since I was about 10 years old.

I’d heard of it of course.  It’s that weird disease that makes you yell swear words at inappropriate times, right? Well, it’s not a disease, and only about 1% of people who display symptoms have the swearing symptom.

How did I find out? I randomly watched a video on YouTube of a comedian who plays off his Tourette’s for his comedy. His name is Samuel J. Comroe, and the longer I watched the more I heard about my own life. Check it out, it’s REALLY funny.

The most common Tourette’s symptoms are tics. It’s like a twitch, but twitches are usually one-offs, single or few instances.  A tic can be something as benign as sniffing a couple times per minute. Or a light cough. My first memory of any of this is from when I was about 10 and my mom said one day “What’s with the cough-sniff?” and I said “What are you talking about?”.

She said “Every couple minutes you cough and then sniff”. I said “No I don’t, why would I do that?”. But then I started noticing she was right. There are two kinds of tics in Tourette’s, auditory and muscular. The famous swearing symptom is auditory, but it can be anything.  My first tic combined two auditory tics, and I’ve never had another.

There’s another taxonomy of tics that contains transitory and chronic tics. Mine have been exclusively transitory, though I have one now that I’ve had for years, and I wonder if it will stay. Transitory tics last a few days, weeks, or months, and then fade away. They rarely return, but I’m quite careful not to do them on purpose just to see.

My first really noticeable tic started while I was at camp one summer, so it was a surprise for my family when I came home. You know how you can move your jaw side to side a little bit, and flex the joint, and maybe even pop it like cracking a knuckle?  I started doing that, except also flexing the muscles on my cheek. But only on one side of my face. I sat at dinner the first night home and my dad said “Why are you doing that?!?! You look retarded!”

I need to point out here that my dad was rarely that callous when I was a kid, and I had a good enough relationship with him that I was able to say “I can’t help it, back off!” and he did and I wasn’t scarred by it.

I’d also like to point out here that my dad was a paramedic instructor and my mom was a Registered Nurse, and it never occurred to either one of them in my whole life that I might have an actual neurological disorder to explain this stuff.  My family just said I lived in the Twitchy Zone.  They all came to accept that I had tics.

Over the years I’ve had the ever common shoulder roll a couple times.  We’ve all seen baseball players do it as they come to the mound. I’d just do it every 45 seconds or so for 6 months. (Note, as far as I know, all of my tics have gone to sleep with me at night, I don’t have any tics while sleeping.)  One time after I graduated from college I noticed my forehead muscles ached. Then I realized I had been flexing them every 30 seconds or so for days.  That one lasted just a few weeks. My roommate hated it, he couldn’t understand why I kept doing that when I looked at him.

It’s really hard to cuddle up with my wife and sit still to watch a movie or fireworks or anything. My current tics are small, but she can feel every one of them and it’s really uncomfortable.

My current tics:

  • I move my fingers against each other so they rub, kind of like scratching a slight itch. Many people do this, so unless you watch me long term it’s really hard to notice.
  • I flex the muscles around my ears, forcing my ears back away from my eyes, which pulls my glasses up. Again, glasses wearers will tell you we all do this, but the movement is SO tiny that people don’t usually notice. I just do it every few minutes.
  • My left bicep has had a light tic for a couple years now. It just barely flexes for about a quarter second. Most people don’t notice, but a few people have asked me about it.  I suspect far more notice than say anything. But even that is a small motion, so unless you’re in a conversation with me, or watching closely, you won’t notice.

I’ve always wanted a tic that made my abs flex spontaneously every few seconds, so I could get a free sixpack. Alas.

Tics are often called involuntary, but they’re actually unvoluntary. This means that I can stop a tic any time I want just by thinking about it, but the longer I don’t do it, the more mental focus it requires to keep it from happening. After a few minutes, 100% of my focus is on making it not happen, and as soon as I think away, it happens again.

After watching Comroe’s video I just sat in silence for a while, thinking about all the tics over all the years.  I started reading about Tourette’s and found that I fit the symptom profile perfectly for all age groups. Kids are more likely to be vocal.  It’s worse in the teen years (because who doesn’t need to look different as a teen?). It gets less pronounced in the adult years.

I read about other common symptoms, and was astonished to discover I have most of the symptoms of ADHD. Again, all I knew was ADHD was “hyper” and I was never hyper. But boy do I have the actual symptoms. OCD is another common co-symptom, and while mine is pretty specific, I absolutely have it in some places.

There isn’t really a treatment. Symptoms are rarely bad enough to change one’s capabilities in life. If they make you look or act unusual then you have to get around that, but it’s really not that bad for most people. For a few the tics can be very dramatic, like throwing oneself on the ground, or swinging arms in a wide arc.  Even for those folks the treatment is usually based around hypnosis or something.  Remember the focus thing? That can be exercised and enhanced if you really need to, and it can help a lot of people.

I have been extraordinarily blessed in my life that no-one has ever made me feel bad or teased me about any of this. Kids can be amazingly cruel, and I never got any of that.

It’s really hard to describe how life changing it has been to know what’s been going on all these years.  It’s even weird to say, because my life hasn’t changed. Nothing is any different. But now I know why I was different from the other kids. Why my body does this stuff that I can’t seem to control. There’s a reason, I’m not just randomly out of control of my own body.

I wrote this post so that maybe someone else like me will find it and come to the same understanding. I also hope it’ll help YOU, dear reader, understand what Tourette’s is, and perhaps spread that understanding, so that fewer people make it to their fifties before knowing what they’re dealing with.

The post Living Life With Tourette Syndrome appeared first on As it were....

by topher at September 24, 2019 03:33 AM

September 11, 2019

Whitemice Consulting

PostgreSQL: Casted Indexes

Dates in databases are a tedious thing. Sometimes a time value is recorded as a timestamp, at other times - probably in most cases - it is recorded as a date. Yet it can be useful to perform date-time queries using a representation of time distinct from what is recorded in the table. For example, a database may record timestamps while I want to look up records by date.

To this end PostgreSQL supports indexing a table by a cast of a field.

Create A Sample

testing=> CREATE TABLE tstest (id int, ts timestamp);
CREATE TABLE
testing=> INSERT INTO tstest VALUES (1, '2018-09-01 12:30:16');
INSERT 0 1
testing=> INSERT INTO tstest VALUES (2, '2019-09-02 10:30:17');
INSERT 0 1

Create The Index

Now we can use the "::" operator to create an index on the ts field, but as a date rather than a timestamp.

testing=> create index tstest_tstodate on tstest ((ts::date));
CREATE INDEX

Testing

Now, will the database use this index? Yes, provided we cast ts as we do in the index.

testing=>SET ENABLE_SEQSCAN=off;
SET
testing=> EXPLAIN SELECT * FROM tstest WHERE ts::date='2019-09-02';
                                 QUERY PLAN                                  
-----------------------------------------------------------------------------
 Index Scan using tstest_tstodate on tstest  (cost=0.13..8.14 rows=1 width=12)
   Index Cond: ((ts)::date = '2019-09-02'::date)
(2 rows)

For demonstration it is necessary to disable sequential scanning, ENABLE_SEQSCAN=off, otherwise with a table this small PostgreSQL will never use any index.

Casting values in an index can be a significant performance win when you frequently query data in a form differing from its recorded form.
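
From application code the same rule applies: the query must repeat the cast used in the index expression. A minimal psycopg2 sketch (connection parameters are placeholders):

import psycopg2

conn = psycopg2.connect(dbname="testing", user="tester")
with conn, conn.cursor() as cur:
    # Casting ts to date matches the expression index, so the planner
    # can use tstest_tstodate instead of scanning the whole table.
    cur.execute("SELECT id, ts FROM tstest WHERE ts::date = %s", ("2019-09-02",))
    for row in cur.fetchall():
        print(row)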


by whitemice at September 11, 2019 03:09 PM

August 30, 2019

Whitemice Consulting

Listing Printer/Device Assignments

The assignment of print queues to device URIs can be listed from a CUPS server using the "-v" option.

The following authenticates to the CUPS server cups.example.com as user adam and lists the queue and device URI relationships.

[user@host ~]# lpstat -U adam -h cups.example.com:631 -v | more
device for brtlm1: lpd://cismfp1.example.com/lp
device for brtlp1: socket://lpd02914.example.com:9100
device for brtlp2: socket://LPD02369.example.com:9100
device for brtmfp1: lpd://brtmfp1.example.com/lp
device for btcmfp1: lpd://btcmfp1.example.com/lp
device for cenlm1: lpd://LPD04717.example.com/lp
device for cenlp: socket://LPD02697.example.com:9100
device for cenmfp1: ipp://cenmfp1.example.com/ipp/
device for ogo_cs_sales_invoices: cups-to-ogo://attachfs/399999909/${guid}.pdf?mode=file&pa.cupsJobId=${id}&pa.cupsJobUser=${user}&pa.cupsJobTitle=${title}
device for pdf: ipp-to-pdf://smtp
...
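
If the queue-to-URI map needs to be consumed by a script, the output is easy to parse; a minimal Python sketch (server and user are placeholders, as above):

import subprocess

out = subprocess.run(
    ["lpstat", "-U", "adam", "-h", "cups.example.com:631", "-v"],
    capture_output=True, text=True, check=True,
).stdout

assignments = {}
for line in out.splitlines():
    # Lines look like: "device for brtlm1: lpd://cismfp1.example.com/lp"
    if line.startswith("device for "):
        queue, _, uri = line[len("device for "):].partition(": ")
        assignments[queue] = uri

print(assignments)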

by whitemice at August 30, 2019 07:36 PM

Reprinting Completed Jobs

Listing completed jobs

By default the lpstat command lists the queued/pending jobs on a print queue. However the completed jobs still present on the server can be listed using the "-W completed" option.

For example, to list the completed jobs on the local print server for the queue named "examplep":

[user@host] lpstat -H localhost -W completed examplep
examplep-8821248         ogo             249856   Fri 30 Aug 2019 02:17:14 PM EDT
examplep-8821289         ogo             251904   Fri 30 Aug 2019 02:28:04 PM EDT
examplep-8821290         ogo             253952   Fri 30 Aug 2019 02:28:08 PM EDT
examplep-8821321         ogo             249856   Fri 30 Aug 2019 02:34:48 PM EDT
examplep-8821333         ogo             222208   Fri 30 Aug 2019 02:38:16 PM EDT
examplep-8821337         ogo             249856   Fri 30 Aug 2019 02:38:50 PM EDT
examplep-8821343         ogo             249856   Fri 30 Aug 2019 02:39:31 PM EDT
examplep-8821351         ogo             248832   Fri 30 Aug 2019 02:41:46 PM EDT
examplep-8821465         smagee            1024   Fri 30 Aug 2019 03:06:54 PM EDT
examplep-8821477         smagee          154624   Fri 30 Aug 2019 03:09:38 PM EDT
examplep-8821493         smagee          149504   Fri 30 Aug 2019 03:12:09 PM EDT
examplep-8821505         smagee           27648   Fri 30 Aug 2019 03:12:36 PM EDT
examplep-8821507         ogo             256000   Fri 30 Aug 2019 03:13:26 PM EDT
examplep-8821562         ogo             251904   Fri 30 Aug 2019 03:23:14 PM EDT

Reprinting a completed job

Once the job id is known, from the far-left column of the lpstat output, the job can be resubmitted using the lp command.

To reprint the job with the id of "examplep-8821343", simply:

[user@host] lp -i examplep-8821343 -H restart
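
Putting the two steps together, a small Python sketch that reprints the most recent completed job on a queue (queue name taken from the example above):

import subprocess

queue = "examplep"
listing = subprocess.run(
    ["lpstat", "-H", "localhost", "-W", "completed", queue],
    capture_output=True, text=True, check=True,
).stdout

# The job id is the far-left column; the example output above is
# ordered oldest to newest, so take the last line.
jobs = [line.split()[0] for line in listing.splitlines() if line.strip()]
if jobs:
    subprocess.run(["lp", "-i", jobs[-1], "-H", "restart"], check=True)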

by whitemice at August 30, 2019 07:29 PM

Creating & Deleting CUPS Queues via CLI

Create A Print Queue

[root@host ~]# /usr/sbin/lpadmin -U adam -h cups.example.com:631 -p examplelm1 -E \
  -m "foomatic:HP-LaserJet-laserjet.ppd" -D "Example Pick Ticket Printer"\
   -L "Grand Rapids" -E -v lpd://printer.example.com/lp

This will create a queue named examplelm1 on the host cups.example.com as user adam.

  • "-D" and "-L" specify the printer's description and location, respectively.
  • The "-E" option, which must occur after the "-h" and "-p" options, instructs CUPS to immediately set the new print queue to enabled and accepting jobs.
  • "-v" option specifies the device URI used to communicate with the actual printer.

The printer driver file "foomatic:HP-LaserJet-laserjet.ppd" must be a PPD file available to the print server. PPD files installed on the server can be listed using the "lpinfo -m" command:

[root@crew ~]# lpinfo -m | more
foomatic:Alps-MD-1000-md2k.ppd Alps MD-1000 Foomatic/md2k
foomatic:Alps-MD-1000-ppmtomd.ppd Alps MD-1000 Foomatic/ppmtomd
foomatic:Alps-MD-1300-md1xMono.ppd Alps MD-1300 Foomatic/md1xMono
foomatic:Alps-MD-1300-md2k.ppd Alps MD-1300 Foomatic/md2k
foomatic:Alps-MD-1300-ppmtomd.ppd Alps MD-1300 Foomatic/ppmtomd
...

The existence of the new printer can be verified by checking its status:

[root@host ~]# lpq -Pexamplelm1
examplelm1 is ready
no entries

The "-l" option of the lpstat command can be used to interrogate the details of the queue:

[root@host ~]# lpstat -l -pexamplelm1
printer examplelm1 is idle.  enabled since Fri 30 Aug 2019 02:56:11 PM EDT
    Form mounted:
    Content types: any
    Printer types: unknown
    Description: Example Pick Ticket Printer
    Alerts: none
    Location: Grand Rapids
    Connection: direct
    Interface: /etc/cups/ppd/examplelm1.ppd
    On fault: no alert
    After fault: continue
    Users allowed:
        (all)
    Forms allowed:
        (none)
    Banner required
    Charset sets:
        (none)
    Default pitch:
    Default page size:
    Default port settings:

Delete A Print Queue

A print queue can also be deleted using the same lpadmin command used to create the queue.

[root@host ~]# /usr/sbin/lpadmin -U adam -h cups.example.com:631  -x examplelm1
Password for adam on crew.mormail.com? 
lpadmin: The printer or class was not found.
[root@host ~]# lpq -Pexamplelm1
lpq: Unknown destination "examplelm1"!

Note that deleting the print queue appears to fail; this is only because the lpadmin command attempts to report the status of the named queue after the operation, by which point the queue no longer exists.

by whitemice at August 30, 2019 07:11 PM

July 25, 2019

Whitemice Consulting

Changing Domain Password

Uh oh, Active Directory password is going to expire!

Ugh, do I need to log into a Windows workstation to change my password?

Nope, it is as easy as:

awilliam@beast01:~> smbpasswd -U DOMAIN/adam  -r example.com
Old SMB password:
New SMB password:
Retype new SMB password:
Password changed for user adam

In this case DOMAIN is the NetBIOS domain name and example.com is the domain's DNS domain. One could also specify a domain controller for -r, however in most cases the bare base domain of an Active Directory backed network will resolve to the active collection of domain controllers.
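
If the change ever needs to be scripted, smbpasswd's prompts can be driven with the pexpect module; a rough sketch (domain, realm, and passwords are placeholders):

import pexpect

child = pexpect.spawn("smbpasswd -U DOMAIN/adam -r example.com")
child.expect("Old SMB password:")
child.sendline("old-secret")
child.expect("New SMB password:")
child.sendline("new-secret")
child.expect("Retype new SMB password:")
child.sendline("new-secret")
child.expect(pexpect.EOF)
print(child.before.decode())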

by whitemice at July 25, 2019 03:29 PM

July 08, 2019

As it were ...

What To Expect When Giving Your First WordCamp Talk

I recently convinced a Very Smart Woman to give her first WordCamp talk. What’s a little unique about this circumstance is that it’s also the first WordCamp she’s ever attended. This means she has no frame of reference for what to expect, so she asked me a bunch of questions. I thought it could be useful to grab that perspective and speak to it for posterity. So here is a random collection of things you should know about your first WordCamp talk.

  • There will be a projector you can plug your computer into and it will display on a big screen.
  • You can use any software you want to make your presentation. Keynote, Google slides, PowerPoint, whatever.
  • You don’t have to have a digital component to your talk if you don’t want to.
  • The projector could have a connection type your laptop doesn’t support. A brand new mac supports only USB-C. Many older projectors have only VGA or HDMI. I recommend investing in a converter that fits your laptop.  So if your laptop does DisplayPort and the projector is HDMI, you might want a converter like this.  That said, the majority of the time there’s a converter onsite for you to borrow, whether from the venue or another speaker. Don’t count on it, buy converters when you can afford them, but don’t avoid speaking just because you don’t have a converter.
  • There will be someone within your view that will hold signs up when you’re near the end of your speaking time. You’ll see a 10 minute sign and a 5 minute sign.  Sometimes this person will also introduce you at the beginning of your talk, sometimes not. It can always be your choice though.
  • You should end your talk about 10 minutes before the deadline so there’s time for questions.
  • If you don’t have time for all the questions, announce that you’ll be at the Happiness Bar right after your talk. The Happiness Bar is a place for people to get help and ask questions.  You can hang out there for however long you want answering questions.
  • People are encouraged to walk out of a talk if they discover it’s not suited to them. Don’t take this personally. If your talk really isn’t for them then they need to not waste that time wishing they were in another talk.
  • If someone asks a “question” that’s “more of a comment really”, feel free to interrupt and tell them this is a time for questions, and they could meet you at the happiness bar later if they want. This is YOUR talk, don’t let someone hijack it and make it into what they think it should be. The same holds true of anyone taking control from you. Be strong. YOU are the expert at the front of the room.
  • At some point in your speaking career someone is going to attend your talk that you think is WAY smarter/more knowledgeable/better coder than you or whatever. Don’t worry about it. They’ll still learn something from you, I promise. They attended because they want to hear what you have to say.
  • There’s often a speaker/sponsor dinner/soiree the night before WordCamp. This is usually similar to the after party, but with FAR fewer people. I strongly recommend you attend. They have experience to share, and soon you will too.
  • Speakers usually get a free ticket to WordCamp, so I recommend not buying one until after you find out if you’ve been accepted.

I can’t think of more right now, but I’m sure there are many. Please leave extra tips in the comments below.

The post What To Expect When Giving Your First WordCamp Talk appeared first on As it were....

by topher at July 08, 2019 02:00 PM

June 06, 2019

As it were ...

WordCamp Detroit 2019

On May 18th Cate and I went to WordCamp Detroit. We both spoke. I talked about trends in ecommerce, and Cate talked about Working in WordPress.  It was a small, one-day event, but it was quite fun, and we got to see some unexpected friends.

The post WordCamp Detroit 2019 appeared first on As it were....

by topher at June 06, 2019 09:13 PM

May 24, 2019

Whitemice Consulting

CRON Jobs Fail To Run w/PAM Error

Added a cron job to a service account's crontab using the standard crontab -e -u ogo command. This server has been chugging away for more than a year, with lots of stuff running within the service account - but nothing via cron.

Subsequently the cron jobs didn't run. :( The error logged in /var/log/cron was:

May 24 14:45:01 purple crond[18909]: (ogo) PAM ERROR (Authentication service cannot retrieve authentication info)

The issue turned out to be that the service account - which is a local account, not something from AD, LDAP, etc... - did not have a corresponding entry in /etc/shadow. This breaks CentOS7's default PAM stack (specified in /etc/pam.d/crond). The handy utility pwck will fix this issue, after which the jobs ran without error.

[root@purple ~]# pwck
add user 'ogo' in /etc/shadow? y
pwck: the files have been updated
[root@purple ~]# grep ogo /etc/shadow
ogo:x:18040:0:99999:7:::
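
The underlying condition is easy to check for; a small sketch that lists local accounts present in /etc/passwd but missing from /etc/shadow (must run as root to read /etc/shadow):

with open("/etc/passwd") as f:
    passwd_users = {line.split(":")[0] for line in f if line.strip()}

with open("/etc/shadow") as f:
    shadow_users = {line.split(":")[0] for line in f if line.strip()}

# Any user printed here would trip the same PAM failure under crond
for user in sorted(passwd_users - shadow_users):
    print(user)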

by whitemice at May 24, 2019 08:09 PM

April 24, 2019

As it were ...

April 18, 2019

Whitemice Consulting

MySQL: Reporting Size Of All Tables

This is a query to report the number of rows and the estimated size of all the tables in a MySQL database:

SELECT 
  table_name, 
  table_rows, 
  ROUND(((data_length + index_length) / 1024 / 1024), 2) AS mb_size
FROM information_schema.tables
WHERE table_schema = 'maindb';

Results look like:

table_name                                  table_rows mb_size 
------------------------------------------- ---------- ------- 
mageplaza_seodashboard_noroute_report_issue 314314     37.56   
catalog_product_entity_int                  283244     28.92   
catalog_product_entity_varchar              259073     29.84   
amconnector_product_log_details             178848     6.02    
catalog_product_entity_decimal              135936     16.02   
shipperhq_quote_package_items               115552     11.03   
amconnector_product_log                     114400     767.00  
amconnector_productinventory_log_details    114264     3.52    

This is a very useful query as the majority of MySQL applications are poorly designed; they tend not to clean up after themselves.
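
The same report is handy to run from a script; a minimal sketch using the pymysql module (connection values are placeholders):

import pymysql

conn = pymysql.connect(host="localhost", user="report",
                       password="secret", database="maindb")
with conn.cursor() as cur:
    cur.execute("""
        SELECT table_name, table_rows,
               ROUND(((data_length + index_length) / 1024 / 1024), 2) AS mb_size
        FROM information_schema.tables
        WHERE table_schema = %s
        ORDER BY mb_size DESC
    """, ("maindb",))
    for name, rows, mb in cur.fetchall():
        print("{0:50} {1:>10} {2:>8}".format(name, rows, mb))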

by whitemice at April 18, 2019 06:30 PM

April 16, 2019

As it were ...

I’m A Travelin’ Man

Starting last fall I’ve been traveling quite a bit more than usual. I had every intention of blogging it all like crazy, but blogging is like any other habit. If you don’t do it it’s not a habit.

So I’m going to make this a roundup post and try to do better in the future.

Sydney, Australia

I started my current job last summer on my birthday. Four days later I was in Sydney. It was a bit of a surprise and very exciting. We have an office there, it was the week of WordCamp, and there were two other conferences happening that week. So I went for a week and had a GREAT time. I got to know some of my new co-workers and interact with some from a previous job.

I took lots of pictures, but here are just a couple.

Omaha, Nebraska

In August Cate and I went to WordCamp Omaha.  She’s always wanted to go, and I enjoyed Omaha the other time I’d been there, so away we went. We rented a car and drove, which was a good time all by itself. Cate spoke at the camp but I did not.

Here are a couple pictures.

Pittsburgh, PA

In September Cate and I went to WordCamp Pittsburgh. We’d never been to the city and there were some friends there we wanted to see.  We rented a car again and still enjoyed it. Again, more pics:

Philadelphia, PA

In October I went to WordCamp Philadelphia for work.  It was kind of our kickoff event, but the plugin wasn’t QUITE ready, so we just talked a lot. I really enjoyed traveling with co-workers and showing them what WordCamps are like. I got a little HeroPress love while I was there, some fans were excited to meet me.

Austin, TX

Since we have an office in Austin I’ve been there four or five times in the last 10 months or so. I won’t talk a lot about it since I’ve already blogged about it.

Dallas, TX

We have a bunch of friends in Dallas as well, and made a bunch of new ones. I came along with Cate on this one, she spoke again, to great success as always. The most impressive thing about Dallas was the free range beer.


Nashville TN, WordCamp US

This one was for work again, but Cate came again of course. Several co-workers from BigCommerce came along as well as a couple representatives from Modern Tribe. We had a great time all around.  Matt stopped by the booth and we chatted, and he even talked about how cool we are on stage at the State Of The Word.

Philly Again

In February the Philadelphia meetup folk asked me to come back and present about BigCommerce. It was a lightning trip, 24 hours on the ground, but hugely successful in my opinion.

Phoenix, AZ

Also in February Cate and I went to WordCamp Phoenix. The weather was great, about 45 degrees and a bit rainy. BigCommerce sponsored and a couple co-workers came with us. I got to speak about HeroPress, Cate was on a panel, and our friend Tracey spoke about ecommerce.

Dayton, OH

In early March Cate and I went to WordCamp Dayton. We had a really good time, but I forgot to take pictures.  🙁

Orlando and Miami, FL

In mid-March I went to Orlando to their WordPress meetup. It went well, and I was able to get some sweet Disney and Potter swag for my family. I was in Orlando for less than 24 hours, and then flew south to Miami for WordCamp. I got there a couple days early and got to sit by the pool in the sun for one whole day. Then it was back to work.  Since I was in town early I was able to help out the organizers with moving camp stuff from a living room into a big truck.

We were sponsors, so we had several people from BigCommerce.  I spoke about ecommerce once and HeroPress in a lightning talk.

Washington, D.C.

When I was done in Miami I flew directly to D.C. I got there one day early and had a chance to look around. I was there for their meetup, which went very well.

Austin and London

At the end of March I went to Austin for a few days in the office, and then flew from there directly to London, England. I was there for WordCamp, but got there a week early. We have an office there, so I worked with co-workers for a couple days.  I hung out with a friend one evening and walked around town taking pictures. I spoke at WordCamp and met MANY new friends and talked with many old friends.

 

That’s it for now.  Detroit is next, with possibly Santa Clarita in there. Berlin is in June. I’ll try to do better at posting once per trip.

The post I’m A Travelin’ Man appeared first on As it were....

by topher at April 16, 2019 11:39 PM

April 09, 2019

OpenGroupware (Legacy and Coils)

Create a Workflow Process via REST (curl)

Creation of a process via an HTTP PUT is essentially the same as creation of a route via a WebDAV client as REST is a subset of WebDAV. The input message payload for the process must be PUT as an object named InputMessage in the Route's container. XATTRs (extended attributes) can be set using URL parameters; the ability to set XATTR values is an advantage REST has over most WebDAV clients.

Here is an example of creating a process instance of the workflow route "V200TmpxrefrLoad" with an InputMessage from the local file "Desktop/tvh_20194.zip" and XATTRs named "update", "effective", "taskid", and "batchid".

awilliam@beast01:~> curl -v -u fred -T Desktop/tvh_20194.zip 'http://coils.example.com/dav/Workflow/Routes/V200TmpxrefrLoad/InputMessage?update=2019-04-03&effective=2019-04-03&taskid=1063257439&batchid=v200-04/19'
Enter host password for user 'fred':
* Hostname was NOT found in DNS cache
*   Trying 192.168.1.65...
* Connected to coils.example.com (192.168.1.65) port 80 (#0)
* Server auth using Basic with user 'fred'
> PUT /dav/Workflow/Routes/V200TmpxrefrLoad/InputMessage?update=2019-04-03&effective=2019-04-03&taskid=1063257439&batchid=v200-04/19 HTTP/1.1
> Authorization: Basic **************==
> User-Agent: curl/7.37.0
> Host: coils.example.com
> Accept: */*
> Content-Length: 44473641
> Expect: 100-continue
> 
< HTTP/1.1 100 Continue
* We are completely uploaded and fine
< HTTP/1.1 301 Moved
* Server nginx/1.12.2 is not blacklisted
< Server: nginx/1.12.2
< Date: Tue, 09 Apr 2019 18:13:49 GMT
< Content-Type: application/octet-stream
< Content-Length: 0
< Connection: keep-alive
< Set-Cookie: OGOCOILSESSIONID=f9c4efe4-2091-4229-8ac7-68b6fd4a8478-13bb3be8-fae8-472b-9999-514eac324614-3cf33404-c58e-4727-bac1-1754711b9344; Domain=coils.example.com; expires=Wed, 10-Apr-2019 18:13:49 UTC; Path=/
< X-COILS-WORKFLOW-OUTPUT-URL: /dav/Workflow/Routes/V200TmpxrefrLoad/1065656529/output
< X-COILS-WORKFLOW-MESSAGE-LABEL: InputMessage
< X-COILS-WORKFLOW-PROCESS-ID: 1065656529
< Location: /dav/Workflow/Routes/V200TmpxrefrLoad/1065656529/input
< X-COILS-WORKFLOW-MESSAGE-UUID: {688a86f2-3898-4d66-8c47-7393fa9fbad6}
< 
* Connection #0 to host coils.example.com left intact

Success is indicated by an HTTP/301 response. The headers in the response provide important meta-data which may be of use to the client.

  • X-COILS-WORKFLOW-OUTPUT-URL: The URL to watch for the process' output message.
  • X-COILS-WORKFLOW-MESSAGE-LABEL: The label assigned to the new message; this will typically be “InputMessage”.
  • X-COILS-WORKFLOW-PROCESS-ID: The object id of the new process entity.
  • X-COILS-WORKFLOW-MESSAGE-UUID: The UUID of the new message.

The priority of the new process can be set to a value other than the default of 201 using the URL parameter ".priority". The value must be a permissible integer priority value. Note that this parameter has a prefix of "." in order to distinguish it from an XATTR value.

In the circumstance where the creation of the process is quashed by run control, the response will be HTTP/202. The HTTP/202 response will have a header of X-COILS-WORKFLOW-ALERT with a value of “run-control violation” and the body of the response will describe the event.
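
The same PUT is straightforward from Python; a sketch with the requests module (host, file, and XATTR values mirror the curl example; the password is a placeholder):

import requests

params = {
    "update": "2019-04-03",
    "effective": "2019-04-03",
    "taskid": "1063257439",
    "batchid": "v200-04/19",
}
with open("Desktop/tvh_20194.zip", "rb") as f:
    resp = requests.put(
        "http://coils.example.com/dav/Workflow/Routes/V200TmpxrefrLoad/InputMessage",
        params=params,
        data=f,
        auth=("fred", "secret"),
        allow_redirects=False,  # keep the HTTP/301 so its headers can be read
    )

print(resp.status_code)  # 301 indicates success
print(resp.headers.get("X-COILS-WORKFLOW-PROCESS-ID"))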

by whitemice at April 09, 2019 06:25 PM

April 08, 2019

Whitemice Consulting

Informix: Listing The Locks

The current database locks in an Informix engine are easily enumerated from the sysmaster database.

SELECT 
  TRIM(s.username) AS user, 
  TRIM(l.dbsname) AS database, 
  TRIM(l.tabname) AS table,
  TRIM(l.type) AS type,
  s.sid AS session,
  l.rowidlk AS rowid
FROM sysmaster:syslocks l
  INNER JOIN sysmaster:syssessions s ON (s.sid = l.owner)
WHERE l.dbsname NOT IN('sysmaster')
ORDER BY 1; 

The results are pretty straightforward:

User     Database Table           Type Session Row ID
extranet maindb   site_master     IS   436320  0
shuber   maindb   workorder       IS   436353  0
shuber   maindb   workorder       IX   436353  0
shuber   maindb   workorder_visit IS   436353  0
extranet maindb   customer_master IS   436364  0
jkelley  maindb   workorder       IX   436379  0
jkelley  maindb   workorder       IS   436379  0
mwathen  maindb   workorder       IS   436458  0

by whitemice at April 08, 2019 08:10 PM

September 26, 2018

As it were ...

get_options Topher Rap

My friends Kyle and Adam run a podcast together called get_options(). I am ashamed to admit I haven’t listened to any episodes (except one, you’ll see), but in my defense I don’t listen to any podcasts. I’ve been ON a few podcasts, but I didn’t even listen to those episodes.

Anyway, I was talking with Kyle recently and he said “Did you hear the rap I made for you?”.  I had not. I’ve seen Kyle and Adam rap together before, so I knew it could be done, but it never occurred to me that I would be the subject of one of these raps. Yet apparently I was.  It was in honor of me getting a new job. In episode 60 Kyle breaks out the rap.  Here’s the link to the episode, and here’s the Soundcloud clip of just the rap:

The post get_options Topher Rap appeared first on As it were....

by topher at September 26, 2018 12:08 AM

September 14, 2018

As it were ...

Austin, Texas

This summer I’ve had the pleasure of visiting Austin twice. The first trip started on my birthday in July and lasted 4 days. The second trip was in August and lasted 2 weeks. Both times were for work, and both times I stayed at “extended stay” hotels, which means they had full kitchens, and much more robust laundry utilities available.  I don’t remember the name of the first place I stayed, but the second place was called Home2 and is NICE. The laundry facilities were top notch (see images below), the staff were very nice, breakfast was decent every morning. The pool was quite cool.  It’s outside, and saline instead of chlorine. My only regret is that there was no hot tub.

I mostly didn’t go Out. I was at work all day, and mostly worked or studied in my apartment all evening.  One evening I did go out with my boss Travis and 3 guys all named Nate to a place called Perry’s Steakhouse. I had absolutely without question the best steak I’ve ever had in my life. It was absolutely incredible.

I’m sure I’ll get back there, my office is there and some friends are there. I’ll try to do more pictures then.

Here are some pictures from the trip.

Gallery 2

The post Austin, Texas appeared first on As it were....

by topher at September 14, 2018 12:52 AM

September 08, 2018

Whitemice Consulting

Reading BYTE Fields From An Informix Unload

Exporting records from an Informix table is simple using the UNLOAD TO command. This creates a delimited text file with a row for each record and the fields of the record delimited by the specified delimiter. Useful for data archiving, the files can easily be restored or processed with a Python script.

One complexity exists; if the record contains a BYTE (BLOB) field the contents are dumped hex encoded. This is not base64. To read these files take the hex encoded string value and decode it with the faux code-page hex: content.decode("hex") (a codec which exists only in Python 2).

The following script reads an Informix unload file delimited with pipes ("|") decoding the third field which was of the BYTE type.

# Note: this script targets Python 2 - str.decode("hex") does not exist in Python 3
rfile = open(ARCHIVE_FILE, 'r')
counter = 0
row = rfile.readline()
while row:
    counter += 1
    print(
        'row#{0} @ offset {1}, len={2}'
        .format(counter, rfile.tell(), len(row), )
    )
    blob_id, content, mimetype, filename, tmp_, tmp_ = row.split('|')
    content = content.decode("hex")  # hex-decode the BYTE field
    print('  BLOBid#{0} "{1}" ({2}), len={3}'.format(
        blob_id, filename, mimetype, len(content)
    ))
    if mimetype == 'application/pdf':
        if '/' in filename:
            filename = filename.replace('/', '_')
        wfile = open('wds/{0}.{1}.pdf'.format(blob_id, filename, ), 'wb')
        wfile.write(content)
        wfile.close()
    row = rfile.readline()  # advance to the next record, ending the loop at EOF
rfile.close()

by whitemice at September 08, 2018 08:05 PM

July 03, 2018

As it were ...

A Dream Job?

You may recall that in January of 2017 I started a grand experiment with Tanner Moushey. As experiments go, it was a great success, which is to say we learned a lot. As businesses go, it lasted until Feb of 2018. It was a great experience, and I learned a lot, and it paid the bills for a year, but as any entrepreneur will tell you, it’s a stressful life.

So after February I started looking for a Real Job. I applied to a number of places that didn’t even respond (one of which had approached me first!). I did two trials at Automattic and washed out of both of them. That was a great learning experience as well.

Spring faded into summer, and I was doing contract work to keep bread on the table, but that was getting old.

Then one night at 10pm a couple weeks ago my friend Luke sent me a Slack note, saying he knew of a large company looking for a WordPress evangelist, would I be interested? If you know anything about me then you know I was immediately interested.

He told me a little about it on the spot, but he was in a meeting with them in Sydney at the time (hence 10pm my time). I was a little wary at first. This sounded REALLY good, and I’d already been disappointed by other things this summer.

The next morning I sent an email to The Guy at The Company and we arranged to talk when he got back to Austin.  He basically went from plane ride from Sydney to a meeting with me to jury duty, all in one day. Iron man.

We talked for about 30 min and they said they were sending me an offer as quickly as possible.  Five days later I had an offer and accepted it!

So now I’m the WordPress Developer Evangelist for BigCommerce.  “But wait!” you say. “They don’t do WordPress do they?”.  For the unaware, BigCommerce is a hosted ecommerce solution. You sign up, pay the fee, and *poof* you have a store. Well, recently they decided to get into WordPress, big time. You can read about it here and here.

I’m crazy excited of course. I’ve been looking for a WordPress evangelist job for years, but beyond that I’m also really excited about the product. I know who built it, and I know who’s code reviewing it. I’ve been assured by people I trust that they’re putting the appropriate time and money into this project, and it should be really really solid. The number of good WordPress ecommerce plugins is really low, and some serious competition will only be a good thing I think.

So maybe I’ll be seeing you at a WordCamp soon! Feel free to ask me all the questions.

The post A Dream Job? appeared first on As it were....

by topher at July 03, 2018 02:56 PM

May 29, 2018

Whitemice Consulting

Disabling Transparent Huge Pages in CentOS7

The THP (Transparent Huge Pages) feature of modern LINUX kernels is a boon for on-metal servers with a sufficiently advanced MMU. However they can also result in performance degradation and inefficient memory use when enabled in a virtual machine [depending on the hypervisor and hosting provider]. See, for example, "Use of large pages can cause memory to be fully allocated". If you are seeing issues in a virtualized environment that point toward unexplained memory consumption it may be worthwhile to experiment with disabling THP in your guests. These are instructions for controlling the THP feature through the use of a SystemD unit.

Create the file /etc/systemd/system/disable-thp.service:

[Unit]
Description=Disable Transparent Huge Pages (THP)
[Service]
Type=simple
ExecStart=/bin/sh -c "echo 'never' > /sys/kernel/mm/transparent_hugepage/enabled && echo 'never' > /sys/kernel/mm/transparent_hugepage/defrag"
[Install]
WantedBy=multi-user.target

Enable the new unit:

sudo systemctl daemon-reload
sudo systemctl start disable-thp
sudo systemctl enable disable-thp

THP will now be disabled. However, already allocated huge pages are still active. Rebooting the server is advised to bring up the services with THP disabled.
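
Whether the setting took effect can be confirmed by reading back the same sysfs knobs the unit writes; a small Python sketch:

# The active value is shown in brackets, e.g. "always madvise [never]"
for knob in ("enabled", "defrag"):
    path = "/sys/kernel/mm/transparent_hugepage/" + knob
    with open(path) as f:
        print(knob, "->", f.read().strip())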

by whitemice at May 29, 2018 07:30 PM

May 06, 2018

Whitemice Consulting

Informix Dialect With CASE Derived Polymorphism

I ran into an interesting issue when using SQLAlchemy 0.7.7 with the Informix dialect. In a rather ugly database (which dates back to the late 1980s) there is a table called "xrefr" that contains two types of records: "supersede" and "cross". What those signify doesn't really matter for this issue so I'll skip any further explanation. But the really twisted part is that while a single field distinguishes between these two record types - it does not do so based on a consistent value. If the value of this field is "S" then the record is a "supersede", any other value (including NULL) means it is a "cross". This makes creating a polymorphic presentation of this schema a bit more complicated. But have no fear, SQLAlchemy is here!

When faced with a similar issue in the past, on top of PostgreSQL, I've created polymorphic presentations using CASE clauses. But when I tried to do this using the Informix dialect the generated queries failed. They raised the dreaded -201 "Syntax error or access violation" message.

The Informix SQLCODE -201 is in the running for "Most useless error message ever!". Currently it is tied with PHP's "Stack Frame 0" message. Microsoft's "File not found" [no filename specified] is no longer in the running as she is being held at the Hague to face war crimes charges.

Rant: Why do developers get away with such lazy error messages?

The original [failing] code that I tried looked something like this:

    class XrefrRecord(Base):
        __tablename__  = 'xrefr'
        record_id      = Column("xr_serial_no", Integer, primary_key=True)
        ....
        _supersede     = Column("xr_supersede", String(1))
        is_supersede   = column_property( case( [ ( _supersede == 'S', 1, ), ],
                                                else_ = 0 ) )

        __mapper_args__ = { 'polymorphic_on': is_supersede }   


    class Cross(XrefrRecord): 
        __mapper_args__ = {'polymorphic_identity': 0} 


    class Supsersede(XrefrRecord): 
        __mapper_args__ = {'polymorphic_identity': 1}

The generated query looked like:

      SELECT xrefr.xr_serial_no AS xrefr_xr_serial_no,
             .....
             CASE
               WHEN (xrefr.xr_supersede = :1) THEN :2 ELSE :3
               END AS anon_1
      FROM xrefr
      WHERE xrefr.xr_oem_code = :4 AND
            xrefr.xr_vend_code = :5 AND
            CASE
              WHEN (xrefr.xr_supersede = :6) THEN :7
              ELSE :8
             END IN (:9) <--- ('S', 1, 0, '35X', 'A78', 'S', 1, 0, 0)

At a glance it would seem that this should work. If you substitute the values for their place holders in an application like DbVisualizer - it works.

The condition raising the -201 error is the use of place holders in a CASE WHEN structure within the projection clause of the query statement; the DBAPI module / Informix Engine does not [or can not] infer the type [cast] of the values. The SQL cannot be executed unless the values are bound to a type. Why this results in a -201 and not a more specific data-type related error... that is beyond my pay-grade.

An existential dilemma: Notice that when used like this in the projection clause the values to be bound are both input and output values.

The trick to get this to work is to explicitly declare the types of the values when constructing the case statement for the polymorphic mapper. This can be accomplished using the literal_column expression.

    from sqlalchemy import literal_column

    class XrefrRecord(Base):
        _supersede    = Column("xr_supersede", String(1))
        is_supersede  = column_property( case( [ ( _supersede == 'S', literal_column('1', Integer) ) ],
                                                   else_ = literal_column('0', Integer) ) )

        __mapper_args__     = { 'polymorphic_on': is_supersede }

Visually if you log or echo the statements they will not appear to be any different than before; but SQLAlchemy is now binding the values to a type when handing the query off to the DBAPI informixdb module.

Happy polymorphing!

by whitemice at May 06, 2018 08:23 PM

Sequestering E-Mail

When testing applications one of the concerns is always that their actions don't affect the real world. One aspect of this is sending e-mail; the last thing you want is the application you are testing to send a paid-in-full customer a flurry of e-mails that he owes you a zillion dollars. A simple, and reliable, method to avoid this is to adjust the Postfix server on the host used for testing to bury all mail in a shared folder. This way:

  • You don't need to make any changes to the application between production and testing.
  • You can see the message content exactly as it would ordinarily have been delivered.

To accomplish this you can use Postfix's generic address rewriting feature; generic address rewriting processes addresses of messages sent [vs. received as is the more typical case for address rewriting] by the service. For this example we'll rewrite every address to shared+myfolder@example.com using a regular expression.

Step#1

Create the regular expression map. Maps are how Postfix handles all rewriting; a match for the input address is looked for in the left hand [key] column and rewritten in the form specified by the right hand [value] column.

echo "/(.)/           shared+myfolder@example.com" > /etc/postfix/generic.regexp

Step#2

Configure Postfix to use the new map for generic address rewriting.

postconf -e smtp_generic_maps=regexp:/etc/postfix/generic.regexp

Step#3

Tell Postfix to reload its configuration.

postfix reload

Now any mail, to any address, sent via the hosts' Postfix service, will be driven not to the original address but to the shared "myfolder" folder.
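
To verify the rewrite, send a test message through the local Postfix and check the shared folder; a minimal Python sketch (addresses are placeholders):

import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "app@example.com"
msg["To"] = "customer@example.com"  # will be rewritten by the generic map
msg["Subject"] = "rewrite test"
msg.set_content("This message should land in the shared folder.")

with smtplib.SMTP("localhost") as smtp:
    smtp.send_message(msg)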

by whitemice at May 06, 2018 08:11 PM

April 22, 2018

Whitemice Consulting

LDAP extensibleMatch

One of the beauties of LDAP is how simply it lets the user or application perform searching. The various attribute types hint how to intelligently perform searches such as case sensitivity with strings, whether dashes should be treated as relevant characters in the case of phone numbers, etc... However, there are circumstances when you need to override this intelligence and make your search more or less strict. For example: in the case of case sensitivity of a string. That is the purpose of the extensibleMatch.

Look at this bit of schema:

attributetype ( 2.5.4.41 NAME 'name'
EQUALITY caseIgnoreMatch
SUBSTR caseIgnoreSubstringsMatch
SYNTAX 1.3.6.1.4.1.1466.115.121.1.15{32768} )
attributetype ( 2.5.4.4 NAME ( 'sn' 'surname' )
DESC 'RFC2256: last (family) name(s) for which the entity is known by'
SUP name )

The caseIgnoreMatch means that searches on attribute "name", or its descendant "sn" (used in the objectclass inetOrgPerson), are performed in a case insensitive manner. So...

estate1:~ # ldapsearch -Y DIGEST-MD5 -U awilliam sn=williams dn
SASL/DIGEST-MD5 authentication started
Please enter your password:
SASL username: awilliam
SASL SSF: 128
SASL installing layers
# Adam Williams, People, Entities, SAM, whitemice.org
dn: cn=Adam Williams,ou=People,ou=Entities,ou=SAM,dc=whitemice,dc=org
# Michelle Williams, People, Entities, SAM, whitemice.org
dn: cn=Michelle Williams,ou=People,ou=Entities,ou=SAM,dc=whitemice,dc=org

... this search returns two objects where the sn value is "Williams" even though the search string was "williams".

If for some reason we want to match just the string "Williams", and not the string "williams" we can use the extensibleMatch syntax.

estate1:~ # ldapsearch -Y DIGEST-MD5 -U awilliam "(sn:caseExactMatch:=williams)" dn
SASL/DIGEST-MD5 authentication started
Please enter your password:
SASL username: awilliam
search: 3
result: 0 Success
estate1:~ #

No objects are found, as both entries have "Williams" with an initial capital letter.

Using extensibleMatch I was able to match the value of "sn" with my own preference regarding case sensitivity. The syntax for an extensibleMatch is "({attributename}:{matchingrule}:{filterspec})". This can be used inside a normal LDAP filter along with 'normal' matching expressions.
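
The same filter works from application code; a sketch using the ldap3 module (server, bind DN, and password are placeholders):

from ldap3 import Server, Connection

server = Server("ldap://estate1.whitemice.org")
conn = Connection(server, user="cn=admin,dc=whitemice,dc=org",
                  password="secret", auto_bind=True)

# extensibleMatch: only entries whose sn is exactly "Williams" match
conn.search("dc=whitemice,dc=org",
            "(sn:caseExactMatch:=Williams)",
            attributes=[])
for entry in conn.entries:
    print(entry.entry_dn)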

For more information on extensibleMatch see RFC2252 and your DSA's documentation [FYI: Active Directory is a DSA (Directory Service Agent), as is OpenLDAP].

by whitemice at April 22, 2018 03:14 PM

Android, SD cards, and exfat

I needed to prepare some SD cards for deployment to Android phones. After formatting the first SD card in a phone I moved it to my laptop and was met with the "Error mounting... unknown filesystem type exfat" error. That was somewhat startling as GVFS gracefully handles almost anything you throw at it. Following this I dropped down to the CLI to inspect how the SD card was formatted.

awilliam@beast01:~> sudo fdisk -l /dev/mmcblk0
Disk /dev/mmcblk0: 62.5 GiB, 67109912576 bytes, 131074048 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x00000000

Device         Boot Start       End   Sectors  Size Id Type
/dev/mmcblk0p1 *     2048 131074047 131072000 62.5G  7 HPFS/NTFS/exFAT

Seeing the file-system type I guessed that I was missing support for the hack that is exFAT [exFAT is FAT tweaked for use on large SD cards]. A zypper search exfat found two uninstalled packages; GVFS is principally an encapsulation of fuse that adds GNOME awesome into the experience - so the existence of a package named "fuse-exfat" looked promising.

I installed the two related packages:

awilliam@beast01:~> sudo zypper in exfat-utils fuse-exfat
(1/2) Installing: exfat-utils-1.2.7-5.2.x86_64 ........................[done]
(2/2) Installing: fuse-exfat-1.2.7-6.2.x86_64 ........................[done]
Additional rpm output:
Added 'exfat' to the file /etc/filesystems
Added 'exfat_fuse' to the file /etc/filesystems

I removed the SD card from my laptop, reinserted it, and it mounted. No restart of anything required. GVFS rules! At this point I could move forward with rsync'ing the gigabytes of documents onto the SD card.

It is also possible to initially format the card in the openSUSE laptop as well. Partition the card creating a partition of type "7" and then use mkfs.exfat to format the partition. Be careful to give each card a unique ID using the -n option.

awilliam@beast01:~> sudo mkfs.exfat  -n 430E-2980 /dev/mmcblk0p1
mkexfatfs 1.2.7
Creating... done.
Flushing... done.
File system created successfully.

The mkfs.exfat command is provided by the exfat-utils package; a filesystem-utils package exists for most (all?) supported file-systems. These -utils packages provide the various commands to create, check, repair, or tune the eponymous file-system type.

by whitemice at April 22, 2018 02:34 PM

April 03, 2018

Whitemice Consulting

VERR_PDM_DEVHLPR3_VERSION_MISMATCH

After downloading a Virtualbox ready ISO of OpenVAS the newly created virtual machine to host the instance failed to start with a VERR_PDM_DEVHLPR3_VERSION_MISMATCH error. The quick-and-dirty solution was to set the instance to use USB 1.1. This setting is changed under Machine -> Settings -> USB -> Select USB 1.1 OHCI Controller. After that change the instance now boots and runs the installer.

virtualbox-qt-5.1.34-47.1.x86_64
virtualbox-5.1.34-47.1.x86_64
virtualbox-host-kmp-default-5.1.34_k4.4.120_45-47.1.x86_64
kernel-default-4.4.120-45.1.x86_64
openSUSE 42.3 (x86_64)

by whitemice at April 03, 2018 12:21 PM

March 11, 2018

Whitemice Consulting

AWESOME: from-to Change Log viewer for PostgreSQL

Upgrading a database is always a tedious process - a responsible administrator will have to read through the Changelog for every subsequent version from the version ze is upgrading from to the one ze is upgrading to.

Then I found this! This is a Changelog viewer which allows you to select a from and a to version and shows you all the changelogs in between; on one page. You still have to read it, of course, but this is a great time saver.

by whitemice at March 11, 2018 01:15 AM

February 09, 2018

As it were ...

Get Array neighbors in PHP

I recently had an issue where I had a custom post type of Artist, and another of Artwork. When looking at a single piece of Artwork, I used posts2posts to get the related Artist, and then I also did a query to get an array of all of the other Artwork by that Artist. I used that array to render them as thumbnails below the main Artwork.

The related Artwork array really isn’t sorted in any way. It’s a standard post array, with incremental keys.

I needed to put links on the page to Previous and Next Artworks, like this:

Screenshot showing next and prev links.

Initially I used WordPress’ built in functions for previous and next post, but that relied on the chronology of all Artworks, irrespective of Artist, so they immediately left the current Artist and went to something unrelated.

To get the array I wanted, I took my standard posts array and did this:

// get a list of all of the IDs of that other art
$art_list = wp_list_pluck( $connected_art, 'post_title', 'ID' );

which got me a very concise array of array keys matching my post IDs. The post_title is a red herring, I don’t use it.

I needed to take my Art array and get the ID of the post on either side of the current Artwork. I looked at prev() and next() but messing with the array pointer doesn’t work in a for loop, so it was a pain.

I found some code in the comments for the next() function that came close to what I wanted, but left some things to be desired. So I used it as a base and ended up with the function below.

/**
 * Function to get array keys on either side of a given key. If the
 * initial key is first in the array then prev is null. If the initial
 * key is last in the array, then next is null.
 *
 * If wrap is true and the initial key is last, then next is the first
 * element in the array.
 *
 * If wrap is true and the initial key is first, then prev is the last
 * element in the array.
 *
 * @param array $arr
 * @param string $key
 * @param bool $wrap
 *
 * @return array $return
 */
function array_neighbor( $arr, $key, $wrap = false ) {

	krsort( $arr );
	$keys       = array_keys( $arr );
	$keyIndexes = array_flip( $keys );

	$return = array();
	if ( isset( $keys[ $keyIndexes[ $key ] - 1 ] ) ) {
		$return['prev'] = $keys[ $keyIndexes[ $key ] - 1 ];
	} else {
		$return['prev'] = null;
	}

	if ( isset( $keys[ $keyIndexes[ $key ] + 1 ] ) ) {
		$return['next'] = $keys[ $keyIndexes[ $key ] + 1 ];
	} else {
		$return['next'] = null;
	}

	if ( false != $wrap && empty( $return['prev'] ) ) {
		end( $arr ); // move the internal pointer to the last element
		$return['prev'] = key( $arr );
	}

	if ( false != $wrap && empty( $return['next'] ) ) {
		reset( $arr ); // move the internal pointer back to the first element
		$return['next'] = key( $arr );
	}

	return $return;
}

Then you get your data with something like this, where $current_art is just the current post ID.

// grab the IDs of the art on either side of this one
$art_neighbors = array_neighbor( $art_list, $current_art, true );

The output looks like this:

Array
(
    [prev] => 2257
    [next] => 2253
)

Those are post IDs, so I was able to simply drop those into get_permalink() for my next/prev links.

The post Get Array neighbors in PHP appeared first on As it were....

by topher at February 09, 2018 02:57 PM

January 17, 2018

Whitemice Consulting

Discovering Informix Version Via SQL

It is possible using the dbinfo function to retrieve the engine's version information via an SQL command:

select dbinfo('version','full') from sysmaster:sysdual

which will return a value like:

IBM Informix Dynamic Server Version 12.10.FC6WE
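
The same query is just as easy to issue from a script. A minimal sketch using the informixdb DB-API module - the database@server pair and the credentials are placeholders:

import informixdb

# 'sysmaster@myserver', 'informix', and 'secret' are placeholders
conn = informixdb.connect('sysmaster@myserver', 'informix', 'secret')
cursor = conn.cursor()
cursor.execute("SELECT dbinfo('version','full') FROM sysmaster:sysdual")
print(cursor.fetchone()[0])  # e.g. IBM Informix Dynamic Server Version 12.10.FC6WE
conn.close()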

by whitemice at January 17, 2018 08:56 PM

December 28, 2017

OpenGroupware (Legacy and Coils)

ODBC Support Added To OIE

As of OpenGroupware Coils 0.1.49r112 support for ODBC data sources has been integrated into OIE. These SQL data sources are defined in the OIESQLSources default just as PostgreSQL and Informix database connections are. This feature requires the pyodbc module to be installed. The availability of this module on your workflow node can be verified using the coils-dependency-check tool.

[ogo@workflow.coils.example.com ~]# coils-dependency-check 
OK: Module markdown (Markdown rendering, required for /wiki protocol) available.
OK: Module vobject (vCard and vEvent parsing) available.
OK: Module zope.interface (ZOPE Interfaces for RML engine) available.
OK: Module xlrd (XLS<2007 read support) available.
OK: Module pycups (IPP printing support) available.
OK: Module paramiko (SSH suppport.) available.
OK: Module dateutil (Date & Time Arithmatic) available.
OK: Module lxml (SAX & DOM XML Processing) available.
OK: Module Pillow (Python Imaging Library) available.
OK: Module psycopg2 (PostgreSQL RDBMS connectivity) available.
OK: Module base64 (Encode and decode Base64 data) available.
OK: Module yaml (YAML parser & serializer) available.
OK: Module pyodbc (ODBC SQL connectivity) available.   <<<<<<<<<<
OK: Module xlwt (XLS<2007 write support) available.
OK: Module sqlalchemy (Object Relational Modeling) available.
OK: Module pytz (Python Time Zone tables) available.
OK: Module smbc (SMB/CIFS integration) available.
OK: Module argparse (Enhanced argument parsing, required for /wiki protocol) available.
OK: Module ijson (Streaming JSON parser, requires libyajl) available.
OK: Module z3c.rml (RML Generator, also requires "zope.interface") available.
OK: Module coils.foundation.api.elementflow (Streaming XML Creation) available.
OK: Module coils.foundation.api.pypdf (Simple PDF Operations) available.
OK: Module untangle (XML parsing) available.
OK: Module gnupg (GPG/PGP suppport.) available.
OK: Module informixdb (Informix RDBMS connectivity) available.

The principal use for the ODBC connection is to connect to M$-SQL database engines. In order to make ODBC connections the proper ODBC driver must be installed on the node and properly configured.

ODBC database connections are defined in the OIESQLSources configuration directive just as with PostgreSQL and Informix database connections. The driver must be "odbc" and the parameter "DSN" must be the complete ODBC connection string.

coils-server-config --directive=OIESQLSources  --value='{
....
  "acumaticaMVP1": {"driver": "odbc",
                    "DSN": "DSN=AcumaticaDB;UID=oie-workflow-account;PWD=XXXXXXXXXXXXXXXXXXXXXXXXXX"}
}'
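
Before wiring a connection into a workflow the DSN string can be sanity-checked directly with pyodbc. A minimal sketch, assuming the connection string from the directive above and the ARInvoice table used in the test below:

import pyodbc

# the same connection string as in the OIESQLSources directive (password redacted)
conn = pyodbc.connect('DSN=AcumaticaDB;UID=oie-workflow-account;PWD=XXXXXXXXXXXXXXXXXXXXXXXXXX')
cursor = conn.cursor()
cursor.execute('SELECT COUNT(*) FROM ARInvoice')
print(cursor.fetchone()[0])
conn.close()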

A defined connection can be tested using the coils-test-sql utility. This test is best performed from the node hosting the coils.workflow.executor component.

[ogo@workflow.coils.example.com ~]# coils-test-sql --name=acumaticaMVP1 --table=ARInvoice
Store root is /var/lib/opengroupware.org
Connected to SQL "acumaticaMVP1"
  Select Table: "ARInvoice"

If the connection works then it is ready to be used from workflow actions like sqlSelectAction and sqlExecuteAction.

by whitemice at December 28, 2017 02:43 PM

October 09, 2017

Whitemice Consulting

Failure to apply LDAP pages results control.

On a particular instance of OpenGroupware Coils the switch from an OpenLDAP server to an Active Directory service - which should be nearly seamless - resulted in "Failure to apply LDAP pages results control.". Interesting, as Active Directory certainly supports paged results - the 1.2.840.113556.1.4.319 control.

But there is a caveat! Of course.

Active Directory does not support the combination of the paged control and referrals in some situations. So to reliably get the paged control enabled it is also necessary to disable referrals.

...
dsa = ldap.initialize(config.get('url'))
dsa.set_option(ldap.OPT_PROTOCOL_VERSION, 3)
dsa.set_option(ldap.OPT_REFERRALS, 0)
....

Disabling referrals is likely what you want anyway, unless you are going to implement referral following. Additionally, in the case of Active Directory the referrals rarely reference data which an application would be interested in.
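
For reference, a minimal sketch of a complete paged search with referrals disabled, using python-ldap's SimplePagedResultsControl - the server URL, credentials, base DN, and filter are placeholders:

import ldap
from ldap.controls import SimplePagedResultsControl

dsa = ldap.initialize('ldap://dc.example.com')
dsa.set_option(ldap.OPT_PROTOCOL_VERSION, 3)
dsa.set_option(ldap.OPT_REFERRALS, 0)  # required for reliable paging against AD
dsa.simple_bind_s('user@example.com', 'secret')

page_control = SimplePagedResultsControl(True, size=500, cookie='')
while True:
    msgid = dsa.search_ext(
        'DC=example,DC=com', ldap.SCOPE_SUBTREE, '(objectClass=user)',
        serverctrls=[page_control])
    rtype, rdata, rmsgid, serverctrls = dsa.result3(msgid)
    for dn, attrs in rdata:
        print(dn)
    # the server returns a cookie with each page; an empty cookie means the last page
    controls = [c for c in serverctrls
                if c.controlType == SimplePagedResultsControl.controlType]
    if not controls or not controls[0].cookie:
        break
    page_control.cookie = controls[0].cookie
dsa.unbind()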

The details of Active Directory and paged results + referrals can be found here.

by whitemice at October 09, 2017 03:03 PM

August 31, 2017

Whitemice Consulting

opensuse 42.3

Finally got around to updating my work-a-day laptop to openSUSE 42.3. As usual I did an in-place distribution update via zypper. This involves replacing the previous version repositories with the current version repositories - and then performing a dup. And as usual the process was quick and flawless. After a reboot everything just-works and I go back to doing useful things. This makes for an uninteresting BLOG post, which is as it should be.

zypper lr --url
zypper rr http-download.opensuse.org-f7da6bb3
zypper rr packman
zypper rr repo-non-oss
zypper rr repo-oss
zypper rr repo-update-non-oss
zypper rr repo-update-oss
zypper rr server:mail
zypper ar http://download.opensuse.org/distribution/leap/42.3/repo/non-oss/ repo-non-oss
zypper ar http://download.opensuse.org/distribution/leap/42.3/repo/oss/ repo-oss
zypper ar http://download.opensuse.org/repositories/server:/mail/openSUSE_Leap_42.3/ server:mail
zypper ar http://download.opensuse.org/update/leap/42.3/non-oss/ repo-update-non-oss
zypper ar http://download.opensuse.org/update/leap/42.3/oss/ repo-update-oss
zypper ar http://ftp.gwdg.de/pub/linux/misc/packman/suse/openSUSE_Leap_42.3 packman
zypper lr --url  # double check
zypper ref  # refresh
zypper dup --download-in-advance  # distribution update
zypper up  # update, just a double check
reboot

Done.

by whitemice at August 31, 2017 12:49 PM

August 07, 2017

OpenGroupware (Legacy and Coils)

An Introduction to OIE Tables

The OIE Table entity provides a simple means to embed look-ups, filters, and translations into workflows. The principle of a Table is that it always receives a value and returns a value - a look-up.

Table definitions are presented via WebDAV in the /dav/Workflow/Tables folder as simple YAML files; they can be created and edited using your favorite text editor. If you are familiar with OIE Format definitions the Table definition should seem very familiar. Tables are identified by their unique name which is specified by the name attribute of their YAML description.

StaticLookupTable

The static look-up table provides a method to do simple recoding of data without relying on external data-sources such as an LDAP DSA or SQL RDBMS. The definition of a StaticLookupTable provides a values dictionary where input values are looked up and the corresponding value returned. The optional defaultValue directive may specify a value to be returned if the input value is not found in the values table; if no defaultValue is specified the table will return None.

class: StaticLookupTable
defaultValue: 9
values: { 'ME1932': 4,
          'Kalamazoo': 'abc' }
name: TestStaticLookupTable

Text 1: A StaticLookupTable that returns 4 for the input value "ME1932", and "abc" for the input value "Kalamazoo". Any other input value results in the value 9.
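
The look-up semantics are the same as a Python dictionary get with a default value; roughly:

values = {'ME1932': 4, 'Kalamazoo': 'abc'}   # the values dictionary
default_value = 9                            # the defaultValue directive
print(values.get('ME1932', default_value))   # prints 4
print(values.get('Lansing', default_value))  # prints 9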

PresenceLookupTable

A presence look-up table contains a list of static values. It returns a specified value if the input value matches one of the values stored in the table; otherwise it returns an alternative value. Presence look-up tables are most commonly used when a small and known set of values needs to be used to filter a set of data.

class: PresenceLookupTable
name: BankeCodeExclusionTable
returnValueForFalse: true
returnValueForTrue: false
values: [ME1932, Kalamazoo, 123]

Text 2: A PresenceLookupTable that returns boolean false for the input values "ME1932", "Kalamazoo", and 123; it returns boolean true for all other input values.

SQLLookupTable

An SQLLookupTable permits the translation or look-up of values using an SQL data source defined in the OIESQLSources server default. The table definition must at a minimum define the SQLQueryText and SQLDataSourceName directives. Within the SQLQueryText value the "?" placeholder is replaced by the input value; the first column of the query result is the return value of the table. If the query identifies no rows then a None value is returned from the table.

SQLDataSourceName: mydbconnection
SQLQueryText: 'SELECT CASE WHEN COUNT(*) = 0 THEN
    ''True'' ELSE ''False'' END  FROM bank_code_exclusion WHERE bank_code
    = ? AND ex_service_followup = ''Y'';'
class: SQLLookupTable
doInputStrip: true
doInputUpper: true
doOutputStrip: false
doOutputUpper: false
name: ServiceFollowUpExclusionTable
useSessionCache: true

Text 3: An example SQLLookupTable which uses the data-source "mydbconnection" as defined in the OIESQLSources server default.

The optional directives doInputStrip, doInputUpper, doOutputStrip, and doOutputUpper, which all default to false, allow the input and output values to be converted to upper case and stripped of white-space. Converting a value to upper case may be useful where a database backend does not support case-insensitive comparison. Trimming whitespace on input values protects against attempting to look up padded strings, and output trimming is useful for database engines that always return string values defined like CHAR(30) as padded values.

Using Tables

In Python code using a table is as simple as loading the class and calling the lookup_value method. However the Table performs the look-up, the mechanism is entirely encapsulated in the appropriate Table class [SQLLookupTable, StaticLookupTable, ...].

table = Table.Load(name)
return table.lookup_value(value)

Text 4: How to use a table to look-up values in Python code.

More commonly Table look-ups are going to be performed within workflow actions such as maps and transforms. When performing an XSLT transform any table is available via the tablelookup OIE extension function; this allows values from the input stream to be easily used as look-up values, facilitating translation of ERP and other codes/abbreviations between disparate applications.

<xsl:template match="row">
  <xsl:if test="total_charges>1000">
    <xsl:variable name="include" select="oie:tablelookup('ServiceFollowUpExclusionTable',string(bank_code))"/>
    <xsl:if test="$include='True'">
      <row>
      ...

Text 5: This snippet of an XSLT transform demonstrates how to use a Table look-up from within a stylesheet.

Overall, Tables provide a simple and elegant way to automate all the codes that need to be inserted and translated in the wide variety of documents processed by the workflow engine, as well as providing a means to easily implement dynamic filtering.

Author: Adam Tauno Williams

by whitemice at August 07, 2017 10:25 AM

Invoking an OIE Route from PHP

The repository now contains a PHP class making it simple to invoke an OIE workflow from PHP; see the oie.php file. Using the OIEProcess class defined in that file, processes can be created and the process id and input message UUID retrieved.

$HTTPROOT   = "http://coils.example.com";
$ROUTENAME  = "TEST_MailBack";
$PARAMETERS = array('myParameter'=>'YOYO MAMA', 'otherParam'=>4);
$request = new OIEProcess($HTTPROOT, $ROUTENAME, $PARAMETERS);
if ($request->start('adam', '*******', fopen('/etc/passwd', 'r'), 'text/plain') == 'OK') {
    echo "\n";
    echo "Process ID: " . $request->get_process_id() . "\n";
    echo "Message UUID: " . $request->get_message_id() . "\n";    
}

The start method returns either "OK", "OIEERR" (OIE refused the request), or "SYSERR" (the curl operation failed). The first and second parameters of start are the user credentials; the optional third and fourth parameters are the input message stream and the payload MIME-type. If no MIME-type is specified a default of "application/octet-stream" is assumed.

Author: Adam Tauno Williams

by whitemice at August 07, 2017 10:10 AM

The SMTP Listener

Similar to the coils.workflow.9100 service that can deliver raw socket connections into defined workflows OpenGroupware Coils also provides an SMTP listener. The listener enables workflows to receive messages via SMTP; simply configure your MTA (Mail Transfer Agent) to route some prefix such as "ogo" to your OpenGroupware Coils instance and then use plussed address syntax to deliver e-mail messages to specific objects.

Workflows can be invoked using ogo+wf+routeName@ syntax; for example to send an e-mail message to the workflow named ExampleStatusUpdate a message would be sent to ogo+wf+examplestatusupdate@example.com. In the case of a workflow the text/plain body of the message will become the input message for a new instance (Process) of that route. A ticket is open to implement support for receiving specific MIME-types attached to a message as the process' input message.
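
Invoking a workflow is then no different from sending any other e-mail message. A minimal sketch using Python's smtplib, assuming an MTA at mail.example.com that routes the ogo prefix to the Coils listener:

import smtplib
from email.mime.text import MIMEText

# this text/plain body becomes the input message of the new process
msg = MIMEText('some content for the process input message')
msg['Subject'] = 'invoke ExampleStatusUpdate'
msg['From'] = 'someuser@example.com'
msg['To'] = 'ogo+wf+examplestatusupdate@example.com'

smtp = smtplib.SMTP('mail.example.com')  # placeholder MTA
smtp.sendmail(msg['From'], [msg['To']], msg.as_string())
smtp.quit()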

To target an entity with a message, assuming your delivery is routing ogo@ to the OpenGroupware Coils listener, send a message to ogo+objectId@example.com where objectId is the numeric object id of the entity. In most cases the entity must allow read access to the NetworkContext (OGo#8999) via an ACE in the object's ACL. OpenGroupware Coils network service components interact with the object model via the NetworkContext; this security context has minimal access to the server's objects and content for obvious security reasons. Additional access must be deliberately granted to allow unauthenticated services such as the socket and SMTP listeners to interact with an object.

Document folders are one entity that supports receiving messages from the SMTP listener. In order to access the folder the NetworkContext must have read access to the folder entity and in order to actually store content in the folder it must have write access. For this reason it is recommended that a specific folder be created in a project for the purpose of receiving SMTP messages; from that folder a user, application, or workflow can relocate and possibly rename the documents.

For example, a message sent to ogo+1234567@example.com, where OGo#1234567 is a document folder to which the NetworkContext has read/write permissions, will be stored in its raw form in that folder. Most document-oriented applications however cannot easily deal with raw e-mail messages [after all, they aren't e-mail clients]. Perhaps what you really need is some document that is attached to these e-mail messages? This is a common use case with document centers - they scan documents into PDF and deliver them via SMTP. In order to facilitate this use-case and to streamline document management the user or application can define the MIME type of the documents the folder should receive. If a MIME type is defined for SMTP collection on a folder then only that type of document, attached to a received message, will be saved - the attachments will be automatically saved from the message and the message itself discarded.

In order to define a MIME-type for SMTP collection on a folder create an object property in the http://www.opengroupware.us/smtp namespace having an attribute name of collectMIMEType. The value of that property should be the MIME-type you desire to collect. For example, if {http://www.opengroupware.us/smtp}collectMIMEType was defined on OGo#1234567 [from our previous example] having a value of "application/pdf" then only PDF attachments would be saved to the folder. There are two special-case MIME-types:

  • message/rfc822 - This is the default type, and just as if the object property were not defined, will cause incoming messages to be saved in their entirety.
  • text/plain - This value will save the text/plain message body as a document in the folder.

On every document created by the SMTP listener a set of object properties will be created. These properties correspond to headers in the e-mail message from which the document was created; if a corresponding header does not exist in the e-mail message then the corresponding object property will not be created. The SMTP listener defines a set of interesting headers; if you believe there are headers that should be captured but are not included in this list feel free to request the addition of the header via the project's ticket application on SourceForge.

The currently defined list of object properties created from message headers are:

  • {us.opengroupware.mail.header}subject
  • {us.opengroupware.mail.header}x-spam-level
  • {us.opengroupware.mail.header}from
  • {us.opengroupware.mail.header}to
  • {us.opengroupware.mail.header}date
  • {us.opengroupware.mail.header}x-spam-status
  • {us.opengroupware.mail.header}reply-to
  • {us.opengroupware.mail.header}x-virus-scanned
  • {us.opengroupware.mail.header}x-bugzilla-classification
  • {us.opengroupware.mail.header}x-bugzilla-product
  • {us.opengroupware.mail.header}x-bugzilla-component
  • {us.opengroupware.mail.header}x-bugzilla-severity
  • {us.opengroupware.mail.header}x-bugzilla-status
  • {us.opengroupware.mail.header}x-bugzilla-url
  • {us.opengroupware.mail.header}x-mailer
  • {us.opengroupware.mail.header}x-original-sender
  • {us.opengroupware.mail.header}mailing-list
  • {us.opengroupware.mail.header}list-id
  • {us.opengroupware.mail.header}x-opengroupware-regarding
  • {us.opengroupware.mail.header}x-opengroupware-objectid
  • {us.opengroupware.mail.header}x-original-to
  • {us.opengroupware.mail.header}in-reply-to
  • {us.opengroupware.mail.header}cc
  • {us.opengroupware.mail.header}x-gm-message-state
  • {us.opengroupware.mail.header}message-id

All documents created will have at least the property {us.opengroupware.mail.header}message-id, as Message-ID is a required header [per RFC822]. The SMTP component will not process a message that lacks a Message-ID header. The Message-ID and a timestamp are used to create the document's filename.

In addition to these properties the property {http://www.opengroupware.us/mswebdav}contentType used by the WebDAV presentation will also be set on created documents to store the original MIME-type.

These properties can be used to correlate or qualify the documents, and [of course] can be used as search qualifications when using zOGI's searchForObjects.

Document creation by SMTP provides for a very simple integration path with innumerable consumer and enterprise-level devices. From there your applications can easily access the documents via zOGI (JSON-RPC or XML-RPC), AttachFS (REST), or WebDAV.

Author: Adam Tauno Williams

by whitemice at August 07, 2017 10:06 AM

June 06, 2017

Whitemice Consulting

LDAP Search For Object By SID

All the interesting objects in an Active Directory DSA have an objectSID which is used throughout the Windows subsystems as the reference for the object. When using a Samba4 (or later) domain controller it is possible to simply query for an object by its SID, as one would expect - like "(&(objectSID=S-1-...))". However, when using a Microsoft DC, searching for an object by its SID is not as straight-forward; attempting to do so will only result in an invalid search filter error. Active Directory stores the objectSID as a binary value and one needs to search for it as such. Fortunately converting the text string SID value to a hex string is easy: see the guid2hex(text_sid) function below.

import ldap
import ldap.sasl
import ldaphelper

PDC_LDAP_URI = 'ldap://pdc.example.com'
OBJECT_SID = 'S-1-5-21-2037442776-3290224752-88127236-1874'
LDAP_ROOT_DN = 'DC=example,DC=com'

def guid2hex(text_sid):
    """convert the text string SID to a hex encoded string"""
    s = ['\\{:02X}'.format(ord(x)) for x in text_sid]
    return ''.join(s)

def get_ldap_results(result):
    return ldaphelper.get_search_results(result)

if __name__ == '__main__':

    pdc = ldap.initialize(PDC_LDAP_URI)
    pdc.sasl_interactive_bind_s("", ldap.sasl.gssapi())
    result = pdc.search_s(
        LDAP_ROOT_DN, ldap.SCOPE_SUBTREE,
        '(&(objectSID={0}))'.format(guid2hex(OBJECT_SID), ),
        [ '*', ]
    )
    for obj in [x for x in get_ldap_results(result) if x.get_dn()]:
        """filter out objects lacking a DN - they are LDAP referrals"""
        print('DN: {0}'.format(obj.get_dn(), ))

    pdc.unbind()

by whitemice at June 06, 2017 12:11 AM

April 08, 2017

As it were ...

Why I no longer hate GoDaddy

There was a time when I said “never GoDaddy”. I turned down contracts when the client wanted to be hosted on GoDaddy, and wouldn’t budge. Over the last few years my attitude has changed pretty dramatically. I’m happy to work with GoDaddy now, and I like what they’re doing as a company.

Recently a friend tweeted this:

That is absolutely a fair question, and I think one that deserves a better answer than a tweet back, so this post is intended to be that answer.

Why I Didn’t Like GoDaddy

My primary reason was their choice to use sex as a marketing tool. Every commercial made me cringe. I felt so sad that NASCAR’s first serious female contender was cast as someone sexy rather than someone with amazing accomplishments. There was so much opportunity there to inspire young women and girls with the idea that they can break cultural norms.

A secondary reason was the lifestyle of the owner. He simply made choices I don’t like. Lots of people do, and that’s fine, but I made the choice not to use his product.

There were also some tech issues I didn't like. For a long time you couldn't get shell access, for example. That annoyed me like crazy.

Lastly, they were the biggest player. I always root for the underdog.

What Changed

The real change came when key people inside GoDaddy decided the company was doing harmful things, and decided to do something about it. The owner sold the company and took a smaller and smaller role in controlling the company until he was simply gone.

At that point the opportunity existed to take a higher road, and they did it. The sex came out of the commercials. There are now more women than men in positions of authority inside the company.

In general things have really turned around.

What Doesn’t Matter

I recently heard someone bad mouth GoDaddy, and then someone else jump in and say “How can you hate GoDaddy?  Mendel Kurland is such a cool guy!” For the unaware, Mendel works there. And he is a cool guy, I like him a lot. I have other friends that work there too.

None of that matters. My beef wasn’t with individual people there, but corporate direction.

So Everything’s Perfect?

No. There are still things I don’t like about GoDaddy. But those things are in the same class as things I don’t like about every host as well. They’re not using protocol X, or they meddle too much in the site creation, or whatever. They’re not anything that I would feel like I need to apologize to my daughter for.

In Summary

In the past I’ve been vocal about “never GoDaddy”. I’m not that way anymore.

The post Why I no longer hate GoDaddy appeared first on As it were....

by topher at April 08, 2017 10:13 PM

March 07, 2017

Whitemice Consulting

KDC reply did not match expectations while getting initial credentials

Occasionally one gets reminded of something old.

[root@NAS04256 ~]# kinit adam@example.com
Password for adam@Example.Com: 
kinit: KDC reply did not match expectations while getting initial credentials

Huh.

[root@NAS04256 ~]# kinit adam@EXAMPLE.COM
Password for adam@EXAMPLE.COM:
[root@NAS04256 ~]# 

In some cases the case of the realm name matters.

by whitemice at March 07, 2017 02:18 PM

February 09, 2017

Whitemice Consulting

The BOM Squad

So you have a lovely LDIF file of Active Directory schema that you want to import using the ldbmodify tool provided with Samba4... but when you attempt the import it fails with the error:

Error: First line of ldif must be a dn not 'dn'
Modified 0 records with 0 failures

Eh? @&^$*&;@&^@! It does start with a dn: attribute - it is an LDIF file!

Once you cool down you look at the file using od, just in case, and you see:

0000000   o   ;   ?   d   n   :  sp   c   n   =   H   o   r   d   e   -

The first line does not actually begin with "dn:" - it starts with the "o;?". You've been bitten by the BOM! But even opening the file in vi you cannot see the BOM because every tool knows about the BOM and deals with it - with the exception of anything LDIF related.

The fix is to break out dusty old sed and remove the BOM -

sed -e '1s/^\xef\xbb\xbf//' horde-person.ldf  > nobom.ldf

And double checking it with od again:

0000000   d   n   :  sp   c   n   =   H   o   r   d   e   -   A   g   o

The file now actually starts with a "dn" attribute!
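
If Python is closer at hand than sed, the utf-8-sig codec does the same job; it consumes a leading BOM, if one is present, when decoding. A minimal sketch:

import codecs

with codecs.open('horde-person.ldf', 'r', encoding='utf-8-sig') as infile:
    content = infile.read()  # the BOM, if any, is stripped on decode
with codecs.open('nobom.ldf', 'w', encoding='utf-8') as outfile:
    outfile.write(content)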

by whitemice at February 09, 2017 12:09 PM

Installation & Initialization of PostGIS

Distribution: CentOS 6.x / RHEL 6.x

If you already have a current version of PostgreSQL server installed on your server from the PGDG repository you should skip these first two steps.

Enable PGDG repository

curl -O http://yum.postgresql.org/9.3/redhat/rhel-6-x86_64/pgdg-centos93-9.3-1.noarch.rpm
rpm -ivh pgdg-centos93-9.3-1.noarch.rpm

Disable all PostgreSQL packages from the distribution repositories. This involves editing the /etc/yum.repos.d/CentOS-Base.repo file. Add the line "exclude=postgresql*" to both the "[base]" and "[updates]" stanzas. If you skip this step everything will appear to work - but in the future a yum update may break your system.

Install PostgreSQL Server

yum install postgresql93-server

Once installed you need to initialize and start the PostgreSQL instance

service postgresql-9.3 initdb
service postgresql-9.3 start

If you wish the PostgreSQL instance to start with the system at boot use chkconfig to enable it for the current runlevel.

chkconfig postgresql-9.3 on

The default data directory for this instance of PostgreSQL will be "/var/lib/pgsql/9.3/data". Note: this path is versioned - this prevents the installation of a downlevel or uplevel PostgreSQL package destroying your database if you do so accidentally or forget to follow the appropriate version migration procedures. Most documentation will assume a data directory like "/var/lib/postgresql" [notably unversioned]; simply keep in mind that you always need to contextualize the paths used in documentation to your site's packaging and provisioning.

Enable EPEL Repository

The EPEL repository provides a variety of the dependencies of the PostGIS packages provided by the PGDG repository.

curl -O http://epel.mirror.freedomvoice.com/6/x86_64/epel-release-6-8.noarch.rpm
rpm -Uvh epel-release-6-8.noarch.rpm

Installing PostGIS

The PGDG package for PostGIS should now install without errors.

yum install postgis2_93

If you do not have EPEL successfully enabled when you attempt to install the PGDG PostGIS packages, you will see dependency errors like the following.

---> Package postgis2_93-client.x86_64 0:2.1.1-1.rhel6 will be installed
--> Processing Dependency: libjson.so.0()(64bit) for package: postgis2_93-client-2.1.1-1.rhel6.x86_64
--> Finished Dependency Resolution
Error: Package: gdal-libs-1.9.2-4.el6.x86_64 (pgdg93)
           Requires: libcfitsio.so.0()(64bit)
Error: Package: gdal-libs-1.9.2-4.el6.x86_64 (pgdg93)
           Requires: libspatialite.so.2()(64bit)
Error: Package: gdal-libs-1.9.2-4.el6.x86_64 (pgdg93)
...

Initializing PostGIS

The template database "template_postgis" is expected to exist by many PostGIS applications; but this database is not created automatically.

su - postgres
createdb -E UTF8 -T template0 template_postgis
-- ... See the following note about enabling plpgsql ...
psql template_postgis
psql -d template_postgis -f /usr/pgsql-9.3/share/contrib/postgis-2.1/postgis.sql
psql -d template_postgis -f /usr/pgsql-9.3/share/contrib/postgis-2.1/spatial_ref_sys.sql 

Using the PGDG packages the PostgreSQL plpgsql embedded language, frequently used to develop stored procedures, is enabled in the template0 database from which the template_postgis database is derived. If you are attempting to use other PostgreSQL packages, or have built PostgreSQL from source [are you crazy?], you will need to ensure that this language is enabled in your template_postgis database before importing the schema - to do so run the following command immediately after the "createdb" command. If you see the error stating the language is already enabled you are good to go; otherwise you should see a message stating the language was enabled. If creating the language fails for any reason other than already being enabled you must resolve that issue before proceeding to install your GIS applications.

$ createlang -d template_postgis plpgsql
createlang: language "plpgsql" is already installed in database "template_postgis"

Celebrate

PostGIS is now enabled in your PostgreSQL instance and you can use and/or develop exciting new GIS & geographic applications.

by whitemice at February 09, 2017 11:43 AM

February 03, 2017

Whitemice Consulting

Unknown Protocol Drops

I've seen this one a few times and it is always momentarily confusing: on an interface on a Cisco router there is a rather high number of "unknown protocol drops". What protocol could that be?! Is it some type of hack attempt? Ambitious if they are shaping their own raw packets onto the wire. But, no, the explanation is the much less exciting, and typical, lazy ape kind of error.

  5 minute input rate 2,586,000 bits/sec, 652 packets/sec
  5 minute output rate 2,079,000 bits/sec, 691 packets/sec
     366,895,050 packets input, 3,977,644,910 bytes
     Received 1,591,926 broadcasts (11,358 IP multicasts)
     0 runts, 0 giants, 0 throttles
     0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
     0 watchdog
     0 input packets with dribble condition detected
     401,139,438 packets output, 2,385,281,473 bytes, 0 underruns
     0 output errors, 0 collisions, 3 interface resets
     97,481 unknown protocol drops  <<<<<<<<<<<<<<
     0 babbles, 0 late collision, 0 deferred

This is probably the result of CDP (Cisco Discovery Protocol) being enabled on one interface on the network and disabled on this interface. CDP is the unknown protocol. CDP is a proprietary Data Link layer protocol that, if enabled, sends an announcement out the interface every 60 seconds. If the receiving end gets the CDP packet and has "no cdp enable" in the interface configuration - those announcements count as "unknown protocol drops". The solution is to make the CDP settings, enabled or disabled, consistent on every device in the interface's scope.

by whitemice at February 03, 2017 06:32 PM

Screen Capture & Recording in GNOME3

GNOME3, aka GNOME Shell, provides a comprehensive set of hot-keys for capturing images from your screen as well as recording your desktop session. These tools are priceless for producing documentation and reporting bugs; recording your interaction with an application is much easier than describing it.

  • Alt + Print Screen : Capture the current window to a file
  • Ctrl + Alt + Print Screen : Capture the current window to the cut/paste buffer
  • Shift + Print Screen : Capture a selected region of the screen to a file
  • Ctrl + Shift + Print Screen : Capture a selected region of the screen to the cut/paste buffer
  • Print Screen : Capture the entire screen to a file
  • Ctrl + Print Screen : Capture the entire screen to the cut/paste buffer
  • Ctrl + Alt + Shift + R : Toggle screencast recording on and off.

Recorded video is in WebM format (VP8 codec, 25fps). Videos are saved to the ~/Videos folder and image files are saved in PNG format into the ~/Pictures folder. When screencast recording is enabled there will be a red recording indicator in the bottom right of the screen; this indicator will disappear once screencasting is toggled off again.

by whitemice at February 03, 2017 06:29 PM

Converting a QEMU Image to a VirtualBox VDI

I use VirtualBox for hosting virtual machines on my laptop and received a Windows 2008R2 server image from a consultant as a compressed QEMU image. So how to convert the QEMU image to a VirtualBox VDI image?

Step#1: Convert QEMU image to raw image.

Starting with the file WindowsServer1-compressed.img (size: 5,172,887,552)

Convert the QEMU image to a raw/dd image using the qemu-img utility.

qemu-img convert  WindowsServer1-compressed.img  -O raw  WindowsServer1.raw

I now have the file WindowsServer1.raw (size: 21,474,836,480)

Step#2: Convert the RAW image into a VDI image using the VBoxManage tool.

VBoxManage convertfromraw WindowsServer1.raw --format vdi  WindowsServer1.vdi
Converting from raw image file="WindowsServer1.raw" to file="WindowsServer1.vdi"...
Creating dynamic image with size 21474836480 bytes (20480MB)...

This takes a few minutes, but finally I have the file WindowsServer1.vdi (size: 14,591,983,616)

Step#3: Compact the image

Smaller images are better! It is likely the image is already compact; however this also doubles as an integrity check.

VBoxManage modifyhd WindowsServer1.vdi --compact
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%

Sure enough the file is the same size as when we started (size: 14,591,983,616). The upside is the compact operation went through the entire image without any errors.

Step#4: Cleanup and make a working copy.

Now MAKE A COPY of that converted file and use that for testing. Set the original as immutable [chattr +i] to prevent it being used by accident. I do not want to waste time converting the original image again.

Throw away the intermediate raw image and compress the image we started with for archive purposes.

rm WindowsServer1.raw 
cp WindowsServer1.vdi WindowsServer1.SCRATCH.vdi 
sudo chattr +i WindowsServer1.vdi
bzip2 -9 WindowsServer1-compressed.img 

The files at the end:

File Size
WindowsServer1-compressed.img.bz2 5,102,043,940
WindowsServer1.SCRATCH.vdi 14,591,983,616
WindowsServer1.vdi 14,591,983,616

Step#5

Generate a new UUID for the scratch image. This is necessary anytime a disk image is duplicated. Otherwise you risk errors like "Cannot register the hard disk '/archive/WindowsServer1.SCRATCH.vdi' {6ac7b91f-51b6-4e61-aa25-8815703fb4d7} because a hard disk '/archive/WindowsServer1.vdi' with UUID {6ac7b91f-51b6-4e61-aa25-8815703fb4d7} already exists" as you move images around.

VBoxManage internalcommands sethduuid WindowsServer1.SCRATCH.vdi
UUID changed to: ab9aa5e0-45e9-43eb-b235-218b6341aca9

Generating a unique UUID guarantees that VirtualBox is aware that these are distinct disk images.

Versions: VirtualBox 5.1.12, QEMU Tools 2.6.2. On openSUSE LEAP 42.2 the qemu-img utility is provided by the qemu-img package.

by whitemice at February 03, 2017 02:36 PM

January 24, 2017

Whitemice Consulting

XFS, inodes, & imaxpct

Attempting to create a file on a large XFS filesystem - and it fails with an exception indicating insufficient space! There are available blocks - df says so. Huh? While, unlike traditional UNIX filesystems, XFS doesn't suffer from the boring old issue of "inode exhaustion", it does have inode limits - based on a percentage of the filesystem size.

linux-yu4c:~ # xfs_info /mnt
meta-data=/dev/sdb1              isize=256    agcount=4, agsize=15262188 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=61048752, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal               bsize=4096   blocks=29808, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=0
realtime =none                   extsz=4096   blocks=0, rtextents=0

The key is that "imaxpct" value. In this example inodes are limited to 25% of the filesystem's capacity. That is a lot of inodes! But some tools and distributions may default that percentage to some much lower value - like 5% or 10% (for what reason I don't know). This value can be determined at filesystem creation time using the "-i maxpct=nn" option or adjusted later using the xfs_growfs command's "-m nn" option. So if you have an XFS filesystem with available capacity that is telling you it is full: check your "imaxpct" value, then grow the inode percentage limit.

by whitemice at January 24, 2017 07:59 PM

Changing FAT Labels

I use a lot of SD cards and USB thumb-drives; when plugged in, these devices automount in /media as either the file-system label (if set) or some arbitrary thing like /media/disk46. So how can one modify or set the label on an existing FAT filesystem? Easy as:

mlabel -i /dev/mmcblk0p1 -s ::WMMI06  
Volume has no label 
mlabel -i /dev/mmcblk0p1  ::WMMI06
mlabel -i /dev/mmcblk0p1 -s :: 
Volume label is WMMI06

mlabel -i /dev/sdb1 -s ::
Volume label is Cruzer
mlabel -i /dev/sdb1  ::DataCruzer
mlabel -i /dev/sdb1 -s ::
Volume label is DataCruzer (abbr=DATACRUZER )

mlabel is provided by the mtools package. Since we don't have a drive letter the "::" is used to refer to the actual device specified using the "-i" directive. The "-s" directive means show; otherwise the command attempts to set the label to the value immediately following (no whitespace!) the drive designation [default behavior is to set, not show].

by whitemice at January 24, 2017 07:51 PM

Deduplicating with group_by, func.min, and having

You have a text file with four million records and you want to load this data into a table in an SQLite database. But some of these records are duplicates (based on certain fields) and the file is not ordered. Due to the size of the data, loading the entire file into memory doesn't work very well. And due to the number of records, doing a check-at-insert when loading the data is also prohibitively slow. But what does work pretty well is just to load all the data and then deduplicate it. Having an auto-increment record id is what makes this possible.

class VendorSKU(scratch_base):
    __tablename__ = 'sku'
    id      = Column(Integer, primary_key=True, autoincrement=True)
...

Once all the data gets loaded into the table the deduplication is straight-forward using minimum and group by.

query = scratch.query(
    func.min( VendorCross.id ),
    VendorCross.sku,
    VendorCross.oem,
    VendorCross.part ).filter(VendorCross.source == source).group_by(
        VendorCross.sku,
        VendorCross.oem,
        VendorCross.part ).having(
            func.count(VendorCross.id) > 1 )
counter = 0
for (id, sku, oem, part, ) in query.all( ):
    counter += 1
    scratch.query(VendorCross).filter(
        and_(
            VendorCross.source == source, 
            VendorCross.sku == sku,
            VendorCross.oem == oem,
            VendorCross.part == part,
            VendorCross.id != id ) ).delete( )
    if not (counter % 1000):
        # Commit every 1,000 records, SQLite does not like big transactions
        scratch.commit()
scratch.commit()

This incantation removes all the records from each group except for the one with the lowest id. The trick for good performance is to batch many deletes into each transaction - only commit every so many [in this case 1,000] groups processed; just also remember to commit at the end to catch the deletes from the last iteration.
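
For comparison, the whole deduplication can also be expressed as a single SQL statement. A sketch using the sqlite3 module directly, assuming a table named sku with sku, oem, and part columns - the names here are illustrative:

import sqlite3

conn = sqlite3.connect('scratch.db')  # placeholder database file
# keep only the lowest id within each (sku, oem, part) group
conn.execute(
    'DELETE FROM sku WHERE id NOT IN '
    '(SELECT MIN(id) FROM sku GROUP BY sku, oem, part)')
conn.commit()
conn.close()

This runs as one big transaction, however, so on very large tables the batched-commit approach above remains kinder to SQLite.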

by whitemice at January 24, 2017 07:45 PM

AIX Printer Migration

There are few things in IT more utterly and completely baffling than the AIX printer subsystem.  While powerful it accomplishes its task with more arcane syntax and scattered settings files than anything else I have encountered. So the day inevitably comes when you face the daunting task of copying/recreating several hundred print queues from some tired old RS/6000 we'll refer to as OLDHOST to a shiny new pSeries known here as NEWHOST.  [Did you know the bar Stellas in downtown Grand Rapids has more than 200 varieties of whiskey on their menu?  If you've dealt with AIX's printing subsystem you will understand the relevance.] To add to this Sisyphean task the configuration of those printers have been tweaked, twiddled and massaged individually for years - so that rules out the wonderful possibility of saying to some IT minion "make all these printers, set all the settings exactly the same" [thus convincing the poor sod to seek alternate employment, possibly as a bar-tender at the aforementioned Stellas].

Aside: Does IBM really truly not provide a migration technique?  No. Seriously, yeah. 

But I now present to you the following incantation [to use at your own risk]:

scp root@OLDHOST:/etc/qconfig /etc/qconfig
stopsrc -cg spooler
startsrc -g spooler
rsync --recursive --owner --group --perms \
  root@OLDHOST:/var/spool/lpd/pio/@local/custom/ \
  /var/spool/lpd/pio/@local/custom/
rsync --recursive --owner --group --perms  \
  root@OLDHOST:/var/spool/lpd/pio/@local/dev/ \
  /var/spool/lpd/pio/@local/dev/
rsync --recursive --owner --group --perms  \
  root@OLDHOST:/var/spool/lpd/pio/@local/ddi/ \
  /var/spool/lpd/pio/@local/ddi/
chmod 664 /var/spool/lpd/pio/@local/ddi/*
chmod 664 /var/spool/lpd/pio/@local/custom/*
enq -d
cd  /var/spool/lpd/pio/@local/custom
for FILE in `ls`
 do
   /usr/lib/lpd/pio/etc/piodigest $FILE 
 done
chown root:printq /var/spool/lpd/pio/@local/custom/*
chown root:printq /var/spool/lpd/pio/@local/ddi/*
chmod 664 /var/spool/lpd/pio/@local/ddi/*
chmod 664 /var/spool/lpd/pio/@local/custom/*

Execute this sequence on NEWHOST and the print queues and their configurations will be "migrated". 

NOTE#1: This depends on all those print queues being network attached printers.  If the system has direct attached printers that correspond to devices such as concentrators, lion boxes, serial ports, SCSI buses,.... then please do not do this, you are on your own.  Do not call me, we never talked about this.

NOTE#2: This will work once.  If you've then made changes to printer configuration or added/removed printers do not do it again.  If you want to do it again first delete ALL the printers on NEWHOST.  Then reboot, just to be safe.  At least stop and start the spooler service after deleting ALL the printer queues.

NOTE#3: I do not endorse, warranty, or stand behind this method of printer queue migration.  It is probably a bad idea.  But the entire printing subsystem in AIX is a bad idea, sooo.... If this does not work do not call me; we never talked about this.

by whitemice at January 24, 2017 11:46 AM

The source files could not be found.

I have several Windows 2012 VMs in a cloud environment and discovered I am unable to install certain roles / features. Attempting to do so fails with a "The source files could not be found." error. This somewhat misleading message indicates Windows is looking for the OS install media. Most of the solutions on the Interwebz for working around this error describe how to set the server with an alternate path to the install media ... problem being that these VMs were created from a pre-activated OVF image and there is no install media available from the cloud's library.

Lacking install media the best solution is to set the server to skip the install media and grab the files from Windows Update.

  1. Run "gpedit.msc"
  2. "Local Computer Policy"
  3. "Administrative Templates"
  4. "System"
  5. Enable "Specify settings for optional component installation and component repair"
  6. Check the "Contact Windows Update directly to download repair content instead of Windows Server Update Services (WSUS)"

Due to technical limitations WSUS cannot be utilized for this purpose, which is sad given that there is a WSUS server sitting in the same cloud. :(

by whitemice at January 24, 2017 11:31 AM

January 05, 2017

As it were ...

A Grand Experiment

Well, it’s time for a new job. “What?!?!” you ask. “Didn’t you just get a new job a few months ago?”

Indeed I did. This last August I ended my time with Pippin and moved to Modern Tribe. For a variety of reasons it didn’t work out. No-one’s upset, I still love and respect them, they still like me, it just wasn’t what either of us expected.

So, on to the future.

The plan at this point is to merge my experience as a freelancer with Tanner Moushey’s company and his experience as a freelancer and form a new WordPress agency. We’re doing a short trial period first, just to make sure this is really what we want, but by summer we should have a new company brand etc.

The General Plan

Our goal is freedom, both for ourselves and the people who work for us. This means not being married to the job, or making the job super complicated. We’d like to stay small and flexible, and do relatively small projects. We’re not looking to be a VIP agency or anything.

How You Can Help

If you need any web dev help, let me know. Tell your friends etc. I’m back to taking contracts. This time we’re a team though, which makes for a lot more depth, stability, and security.

This feels so so good, the best I’ve felt about a job since the first time I went 100% freelance.

Thank you for your support.

The post A Grand Experiment appeared first on As it were....

by topher at January 05, 2017 05:22 PM

November 02, 2016

As it were ...

Building a custom Google Map

For about a year now I’ve had a Google map on HeroPress.com showing pins of where my contributors are from. I’ve been using Maps Builder Pro from WordImpress. It’s an excellent plugin, and does many of the things I wanted, but not all of them. Here’s what I was after:

My contributors are a custom content type in WordPress, not just authors. Maps Builder Pro provides a search box in the admin of each contributor to search for a location on Google Maps. Then I simply click the location and it fills in a bunch of meta boxes with data like coordinates, city name, and some unique location data.

I wanted a plugin that would automatically go get all that data, organize people by location, grouping people who are from the same location, and put in one pin per location, with the bubbles showing all the people from that location.

The map I made with Maps Builder Pro let me do most of this, but manually.  I had to keep the map up to date each week, and I was terrible at that.

So I wanted a new plugin, but I dearly love the admin UI for gathering and storing data that Maps Builder Pro provides. So that plugin remains, and I’ll use it that way. I built a new plugin for rendering the map with my requirements.

What I learned

I started with a tutorial by a guy named Ian Wright. It’s excellent, as are all of his maps tutorials. I highly recommend them.

Data Organization

The pins and the contents of the pins are two different data sets in Javascript, and they’re related by order. So pin 1 pairs with content block 1, and pin 42 goes with content block 42.  This means you need to have a content block for every pin, even if it’s empty, so that the 42’s match up properly.

Bounding

Ian’s tutorial uses bounding to set the zoom and center for the map. I didn’t understand that, so when I tried to change it, I failed terribly. Here’s what that all means.

When creating a pin we put in

bounds.extend(position);

which tells the map object the bounds of the pins on a map. Then we put in

map.fitBounds(bounds);

which tells the map to zoom just the right amount so you can see all the pins, and center on the middle of them. This made it so that when I later tried to make a different center with setCenter() it didn’t work.

Additionally, when I removed the fitBounds() function the whole map broke. This is because you MUST use some sort of centering code, and I had neither fitBounds() nor setCenter().

The key was to have a setCenter() and NOT have a fitBounds(). Then I was able to easily have a setZoom() as well.

Static Maps

I just found out that you can have the maps API return an image rather than an interactive map.  So you can programmatically make the map, but it loads as fast as an image.  If you don’t need interactivity then it’s a MUCH better way to go.  I’m thinking of putting a small map on each contributor’s page with a single pin, showing where they’re from. It would then link to a google map.

In Summary

I’ve heard a fair number of people whine about how terrible the Google Maps API is, but I really like it.  I don’t know Javascript, and I was able to easily adapt some tutorial code, read the docs to extend it, and make something really slick. I really recommend it.

The post Building a custom Google Map appeared first on As it were....

by topher at November 02, 2016 02:44 AM

October 16, 2016

As it were ...

The Right Stuff

Recently a friend started working on a WordPress plugin. The plugin was scratching an itch, counting the words in a collection of posts and rendering the count in a widget, as an incentive to post regularly. In the process of building the plugin she tweeted quite a bit, about successes, struggles, and frustrations. At one point I sent her some encouragement:

She was right, I hadn’t seen her code.  I’d never seen any of her code. At that point I didn’t know if she could code at all. But I knew she was doing awesome. How?

I could tell from her tweets that she was struggling with things, doing research, overcoming those things, and moving on. Anyone who can complete that process is essentially unstoppable as a developer. That process also works in any other walk of life.

Do you have what it takes to be a WordPress developer? Or any kind of developer? Or anything else in life? If you can confront your struggles head on, find a solution, and move on, you will be unstoppable.

The post The Right Stuff appeared first on As it were....

by topher at October 16, 2016 10:56 PM

October 03, 2016

Whitemice Consulting

Playing With Drive Images

I purchased a copy of Windows 10 on a USB thumbdrive. I chose physical media in order to have (a) a backup and (b) no need to bother with downloading a massive image. Primarily this copy of Windows will be used in VirtualBox for testing, using PowerShell, and other tedious system administrivia. First thing when it arrived, I used dd to make a full image of the thumbdrive so I could tuck it away in a safe place.

dd if=/dev/sde of=Windows10.Thumbdrive.20160918.dd bs=512

But now the trick is to take that raw image and convert it to a VMDK so that it can be attached to a virtual machine. The VBoxManage command provides this functionality:

VBoxManage internalcommands createrawvmdk -filename Windows10.vmdk -rawdisk Windows10.Thumbdrive.20160918.dd

Now I have a VMDK file. If you do this you will notice the VMDK file is small - it is essentially a pointer to the disk image; the purpose of the VMDK is to provide the meta-data necessary to make the hypervisor (in this case VirtualBox) happy. Upshot of that is that you cannot delete the dd image as it is part of your VMDK.

Note that this dd file is a complete disk image; including the partition table:

awilliam@beast01:/vms/ISOs> /usr/sbin/fdisk -l Windows10.Thumbdrive.20160918.dd
Disk Windows10.Thumbdrive.20160918.dd: 14.4 GiB, 15502147584 bytes, 30277632 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x00000000
Device                            Boot Start      End  Sectors  Size Id Type
Windows10.Thumbdrive.20160918.dd1 *     2048 30277631 30275584 14.4G  c W95 FAT32 (LBA)

So if I wanted to mount that partition on the host operating system I can do that by calculating the offset and mounting through loopback. The offset to the start of the partition within the drive image is the start sector multiplied by the sector size: 512 * 2,048 = 1,048,576. The mount command provides support for offset mounting:

beast01:/vms/ISOs $ sudo mount -o loop,ro,offset=1048576 Windows10.Thumbdrive.20160918.dd /mnt
beast01:/vms/ISOs # ls /mnt
83561421-11f5-4e09-8a59-933aks71366.ini  boot     bootmgr.efi  setup.exe                  x64
autorun.inf                              bootmgr  efi          System Volume Information  x86
beast01:/vms/ISOs $ sudo umount /mnt

If all I wanted was the partition, and not the drive, the same offset logic could be used to lift the partition out of the image into a distinct file:

dd if=Windows10.Thumbdrive.20160918.dd of=Windows10.image bs=512 skip=2048

The "Windows10.image" file could be mounted via loopback without bothering with an offset. It might however be more difficult to get a virtual host to boot from a FAT partition that does not have a partition table.

by whitemice at October 03, 2016 10:43 AM

September 15, 2016

Whitemice Consulting

Some Informix DATETIME/INTERVAL Tips

Determine the DATE of the first day of the current week.

(SELECT TODAY - (WEEKDAY(TODAY)) UNITS DAY FROM systables WHERE tabid=1)

Informix always treats Sunday as day 0 of the week. The WEEKDAY function returns the number of the day of the week as a value of 0 - 6, so subtracting the weekday from the current day (TODAY) returns the DATE value of the Sunday of the current week.
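
For comparison, a sketch of the same start-of-week calculation in Python. Note that Python's weekday() numbers Monday as 0, so an adjustment is needed to match Informix's Sunday-as-0 convention:

import datetime

today = datetime.date.today()
# Informix WEEKDAY(): Sunday = 0; Python weekday(): Monday = 0
informix_weekday = (today.weekday() + 1) % 7
week_start = today - datetime.timedelta(days=informix_weekday)
print(week_start)  # the Sunday of the current week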

Determining HOURS between two DATETIME values.

It is all about the INTERVAL data type and its rather odd syntax.

SELECT mpr.person_id, mpr.cn_name, 
  ((SUM(out_time - in_time))::INTERVAL HOUR(9) TO HOUR) AS hours
FROM service_time_card stc
  INNER JOIN morrisonpersonr mpr ON (mpr.person_id = stc.technician_id)
WHERE mpr.person_id IN (SELECT person_id FROM branch_membership WHERE branch_code = 'TSC')
  AND in_time > (SELECT TODAY - (WEEKDAY(TODAY)) UNITS DAY FROM systables WHERE tabid=1)  
GROUP BY 1,2

The "(9)" part of the expression INTERVAL HOUR(9) TO HOUR is key - it allocates lots of room for hours, otherwise any value of more than a trivial number of hours will cause the clearly correct by not helpful SQL -1265 error: "Overflow occurred on a datetime or interval operation". As, in my case I had a highest value of 6,483 hours I needed at least HOUR(4) TO HOUR to avoid the overflow error. HOUR(9) is the maximum - an expression of HOUR(10) results in an unhelpful generic SQL -201: "A syntax error has occurred.". On the other hand HOURS(9) is 114,155 years and some change, so... it is doubtful that is going to be a problem in most applications.

by whitemice at September 15, 2016 07:46 PM

July 28, 2016

As it were ...

A new job at Modern Tribe

I’m happy to announce that today is my last day at Sandhills Development (working on Easy Digital Downloads with Pippin), and that Monday will be my first day at Modern Tribe. In this post I hope to answer a few common questions.  🙂

Why? Didn’t you just get a new job?

I’ve been with Pippin for just over a year. I joined his team to be doing things other than development, things like documentation, community involvement etc. At the time I was coming off the high of promoting HeroPress and having a wonderful time Not Developing.

As it turns out, developing is what really excites me in my career. I simply never fell in love with writing docs the way I thought I would.

I’d like to be clear that Pippin is a wonderful boss and his company is a spectacular place to work. We’re parting on very good terms.

What will you be doing at Modern Tribe?

I applied for the job of Lead Developer. I’m not going to drop into that position immediately, that would be foolish until I know the culture and processes better. I’ll be doing whatever they tell me to do. 🙂 Developer is my primary purpose for being there though.

Still going to do HeroPress?

Yep, I do that in my spare time, and Modern Tribe doesn’t have a problem with that. Personally, many of the folks there are big fans of HeroPress.

Additionally, some exciting things are happening around the idea of expanding HeroPress a bit, more on that later.

 

The post A new job at Modern Tribe appeared first on As it were....

by topher at July 28, 2016 11:44 AM

July 23, 2016

As it were ...

Dragons fly

image

image

Caught this little guy on the edge of the grill the other night.

The post Dragons fly appeared first on As it were....

by topher at July 23, 2016 09:12 PM

July 17, 2016

As it were ...

My Birth Day

In the months before my dad died we went through a lot of Stuff. Some of it was his, some was my moms, some from his parents and in-laws.

One of the boxes he showed me held a bunch of diaries from my maternal Grandmother. I never knew they existed, so I started looking through them until I came to 1971. I slowly flipped through until I came to July 17.  Here’s what I found:

It’s a treasure for me to be able to see her handwriting again, to read what she had to say to us, about me.

The post My Birth Day appeared first on As it were....

by topher at July 17, 2016 04:00 AM

July 03, 2016

As it were ...

Honey bee on Lavendar

image

The honey bees are really loving our Lavendar this year.

image

I managed to catch this one in mid flight.

The post Honey bee on Lavendar appeared first on As it were....

by topher at July 03, 2016 01:54 PM