Monday, May 30, 2022

Starship Designer Terms of Service


Welcome, Starship Designer User

Terms of Service
The following terms and conditions govern all use of the app Starship Designer. For purposes of these Terms of Service (“TOS”), the “Starship Designer Platform” is defined to include (i) the App Server Environment, (ii) the App Development Environment, and (iii) any Starship designs created using the other aspects of the Starship Designer Platform. As used herein, the Starship Designer Platform is defined to include these three parts as well as all components thereof and all updates, patches, fixes, modifications and enhancements thereto, including releases of new versions, whether provided to you via download, automatically without additional consent or action on your part or otherwise, and any and all accompanying documentation, files and materials. The Starship Designer Platform is owned and operated on a hobby basis by David L. Dawes operating as VirtualSoundNW. References to I, me and myself all refer to David L. Dawes.


The Starship Design Platform is offered subject to your acceptance without modification of all of the terms and conditions contained herein and all other operating rules, policies and procedures that may be published from time to time on this terms of service page (collectively, the “Agreement”).


Maintaining Accounts
We may provide storage for your designs but if so it may only be available for an intermittent period and should not be counted on; please make copies of anything you create that you wish to save. Uninstalling the app will generally remove our copies of your designs.

Responsibility of Users/Contributors
If you make (or allow any third party to make) designs available by any means, you are entirely responsible for the content of, and any harm resulting from, that Content. In particular, please do not steal and then distribute other folks' designs without permission.

By making Content available, you represent and warrant that:

the downloading, copying and use of the Content will not infringe the proprietary rights, including but not limited to the copyright, patent, trademark or trade secret rights, of any third party;

if your employer has rights to intellectual property you create, you have either (i) received permission from your employer to post or make available the Content, or (ii) secured from your employer a waiver as to all rights in or to the Content;

you have fully complied with any third-party licenses relating to the Content, and have done all things necessary to successfully pass through to end users any required terms;

the Content is not unlawful, harmful, threatening, abusive, tortious, defamatory, libelous, vulgar, obscene, child-pornographic, lewd, profane, invasive of another’s privacy, hateful, or racially, ethnically or otherwise objectionable.

License Grant
Starship Designer is licensed under the Open Gaming License and the source code is available on GitHub.


Subject to your agreement and compliance with the terms and conditions of this TOS, we grant to each user a limited, personal, non-exclusive right to use the App and the Starship designs they create as they see fit, respecting copyright laws. Any copies are the responsibility of the user, so please do not copy copyrighted works.


Changes
We reserve the right to modify or replace any part of these TOS. It is your responsibility to check these TOS periodically for changes. Your continued use of or access to the Starship Designer Platform following the posting of any changes to these TOS constitutes acceptance of those changes.


Data Collection and Use
We collect your email address to identify you and store your data separately. We may email you from time to time (less than every 30 days) with offers or upgrade notices. We will not sell, transfer, export or give access to your information or any portion of it to any other entity. 


Termination
We may be forced to terminate your license (some copyright holders are litigious, and as this is a hobby pursuit I will probably fold at the first threat of legal action). If forced to, I will terminate this license. If I lose interest and stop supporting the app, the license continues as is but no support is available. You can modify the source yourself or attempt to have it done under contract, but I offer no guarantee that this will be possible.


Disclaimer of Warranties
THERE ARE NO REPRESENTATIONS OR WARRANTIES THAT APPLY OR THAT ARE MADE TO YOU IN ANY WAY IN CONNECTION WITH THE STARSHIP DESIGNER PLATFORM OR THESE TOS. TO THE MAXIMUM EXTENT PERMITTED BY LAW, WE DISCLAIM ALL REPRESENTATIONS AND WARRANTIES WITH RESPECT TO THE STARSHIP DESIGNER PLATFORM AND YOUR ACCESS TO AND USE THEREOF, WHETHER EXPRESS OR IMPLIED, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE OR ANY WARRANTIES OF TITLE, NON-INFRINGEMENT AND/OR ARISING FROM A COURSE OF DEALING OR USAGE OF TRADE.

WITHOUT LIMITING THE GENERALITY OF THE FOREGOING, THE STARSHIP DESIGNER PLATFORM IS MADE AVAILABLE TO YOU ON AN “AS IS” AND “AS AVAILABLE” BASIS AND WE DO NOT GUARANTEE, WARRANT OR REPRESENT THAT THE APP SHALL MEET YOUR REQUIREMENTS OR THAT YOUR USE, OPERATION OR RESULTS OF USE SHALL BE UNINTERRUPTED, COMPLETE, RELIABLE, ACCURATE, CURRENT, ERROR-FREE OR OTHERWISE SECURE. YOU ASSUME THE ENTIRE RISK OF DOWNLOADING, INSTALLING, COPYING, OPERATING, USING AND/OR DISTRIBUTING THE STARSHIP DESIGNER PLATFORM.

Limitations on Liability
IN NO EVENT SHALL I BE LIABLE TO YOU OR ANY THIRD PARTY FOR ANY SPECIAL, INCIDENTAL, INDIRECT, CONSEQUENTIAL OR PUNITIVE DAMAGES WHATSOEVER, INCLUDING THOSE RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER OR NOT FORESEEABLE.

THIS IS A FREE APP AND I HAVE NO AGGREGATE LIABILITY UNDER OR IN CONNECTION WITH THIS AGREEMENT. YOU PAID NOTHING AND I AM OBLIGATED FOR NOTHING.


THE LIMITATIONS AND EXCLUSIONS IN THIS SECTION APPLY TO THE MAXIMUM EXTENT PERMITTED BY APPLICABLE LAW IN YOUR JURISDICTION. SOME JURISDICTIONS PROHIBIT THE EXCLUSION OR LIMITATION OF LIABILITY FOR INCIDENTAL, CONSEQUENTIAL OR PUNITIVE DAMAGES. ACCORDINGLY, THE LIMITATIONS AND EXCLUSIONS SET FORTH ABOVE MAY NOT APPLY TO YOU.

General Representation and Warranty
You represent and warrant that (i) your use of the Starship Designer will be in strict accordance with all applicable laws and regulations (including without limitation any local laws or regulations in your country, state, city, or other governmental area, regarding online conduct and acceptable content, and including all applicable laws regarding the transmission of technical data exported from the United States or the country in which you reside) and (ii) your use of the Starship Designer will not infringe or misappropriate the intellectual property rights of any third party.

Governing Law and Dispute Resolution
All matters relating to these TOS and your access to, or use of, the Starship Designer shall be governed by the laws of the State of Washington, United States of America, without regard to conflict of laws principles thereof. You agree that any claim or dispute you may have against me, including any claim or dispute relating to these TOS or to the Starship Designer, must be resolved by a State or Federal court located in King County, Washington, United States of America. You agree to submit to the personal jurisdiction of the courts located in King County, Washington, United States of America for the purpose of litigating any such claims or disputes or any claims or disputes that I may have against you. The parties specifically disclaim the U.N. Convention on Contracts for the International Sale of Goods.

General Terms
No amendment, modification, waiver or discharge of any provision of these TOS shall be valid unless made in writing and signed by me. No failure or delay by me to exercise any right or enforce any obligation shall impair or be construed as a waiver or ongoing waiver of that or any other right or power, unless made in writing and signed by me. These TOS constitute the entire agreement between me and you with respect to your access to or use of the Starship Designer and supersede any prior agreements between you and me on such subject matter. You may not assign or otherwise transfer these TOS, or any right granted hereunder, without my written consent. My rights under these TOS are freely transferable by me. If any provision of these TOS is held to be illegal, invalid or unenforceable, the remaining provisions of these TOS shall be unimpaired and remain in full force and effect. These TOS will be binding upon and will inure to the benefit of the parties, their successors and permitted assigns.

© 2020 David L. Dawes

Starship Designer Policy Statement

Starship Designer (C) 2022 VirtualSoundNW and David L. Dawes

Source code for Starship Designer is available on GitHub and is licensed under the Open Gaming License.

Starship Designer uses your Google email to keep track of your Starship designs. If I can get features added, it will also use your Google identity so you can export designs as spreadsheets and/or docs using Google's apps. No such features are available now; feel free to add them and send me a PR if you have the skills.

We may email you notices of upgrade availability on occasion but otherwise we will not use or sell your information to any third parties.

We use Google's identity service and Google's open source code for the same as our basis for identity, so in addition to the risk that I messed something up, you also inherit the risk that Google did as well. Google is much less likely to mess up, of course, but this product and its software are offered as-is and have no guarantee of proper operation or fitness for any particular purpose. In fact, I can guarantee it will still have bugs - free software does not pay enough to spend enough time fixing everything.

Note that VirtualSoundNW is a hobby level activity - I am releasing the source code in the public domain and not charging anything for anything. As such I can pretty much guarantee the software WILL have bugs that I do not have the time nor budget to fix.

You could always contact me and offer to get changes or bug fixes on a contract basis - and I'm even happy to sign over copyrights to new material and such for appropriate compensation, so feel free to get in touch - but that seems unlikely at best.

Otherwise enjoy the app if you can. Thanks!

Saturday, June 17, 2017

CI/CD and Second Order Test Concerns

Cisco has some reasonably mature media products (phone and video) built using the microservices approach with Continuous Integration/Continuous Delivery and plenty of automated testing. As our products matured, the nature of the challenges we faced changed: we were faced with second-order test effects. The first-order effect of the tests is to test our production source code, catching bugs and increasing the production code's quality. The second-order effect is the increasing overhead of designing, building, operating, modifying, cleaning up and eliminating automated tests. As the total number of tests increases, both the performance and the reliability of the tests become critical to your ability to turn the CI/CD crank on each new change. To make life interesting, we have a world of great techniques we use to improve our production code and apply almost none of them to our test code.
Cisco's agile process used a fairly rigidly defined "definition of done" with a long list of requirements. It was somewhat of a pain, but it did indeed yield code that had appropriate unit, sanity, regression, integration, feature, system, load, performance and soak tests. Code was always fairly modular due to a hard cyclomatic complexity requirement, and we used all the latest bug scanning tools and so forth. Coverage was kept high, and we got large benefits from the careful and frequent testing.
This allowed us to deliver changes and features much more quickly at first. We each built our handful of microservices and their little universes of tests, then added tests for the microservices we depended on. Every time new features are added, multiple new automated tests of various sorts are needed. As time passes and you grow features in an agile manner, you end up with dependencies on more and more microservices, and you only have to get burned a couple of times to realize you need to add tests that verify that the features of the other microservices you rely on do indeed work. This leads to fuzzier lines of responsibility, reinvented test approaches without best practices, and hard-to-maintain tests. Communication across teams helps but is time consuming.
Every time a customer issue is fixed a regression test is added. Tests accumulate, and when a large organization is applying thousands of developers to building new interdependent microservices, the tests multiply at an amazing rate.
Like anything, writing good tests takes time to learn and master. Since the production code is the actual shipping item, much less time is spent revisiting tests, cleaning them up, making them modular and less complex. Get the code looking good, get the test working (it does not have to look good) and check it all in. This also means you're slower to master the test coding process - it's lower priority than the features, since features get your team those critical velocity points.
Given the requirements, maximizing velocity requires skimping on testing and mostly leaving them in the moderately functional state, not the desirable well tested and cleaned up state that increases quality and maintainability. Production code coverage is checked, test code coverage itself is never looked at. Production code is measured for cyclomatic complexity and rejected if it isn't fairly simple, but that is not done with test code. No automated bug checkers for test code!
Over time you get some sweet microservices providing awesomely scalable, performant and reliable features in a manner that simply can't be done in an old school behemoth solution. The pattern works extremely well, but it also accumulates a huge amount of technical debt: the test code turns into a world of hurt. This is the most painful second-order test concern of CI/CD systems that I've seen. Focus on production code over test code gets increasingly expensive over time, especially as you scale the number of contributors up.
Just as we are mastering the architecture and the approach, delivering new features and bug fixes at a rapid pace, and our "velocity" starts peaking (boy, did Cisco go on about velocity), the tests accumulate huge amounts of poorly designed, monolithic, non-modular, error- and breakage-prone code.
Our CI/CD systems refuse to integrate if we fail the tests. The first wave of pain was when scale started increasing massively and performance (as expected) dropped a bit. All was comfortably within expectations, but a few tests would break due to poor design and timing dependencies. Occasional code submissions failed to go through because a test that had nothing to do with your code failed; having never seen that test code, you have no idea why. Rather than check carefully, you immediately rerun the test. If it passes this time given that it has nothing to do with your code, the temptation is pretty much overwhelming to ignore it: try best 2 of 3, if it passes, it's in! While this is an insidious practice, the nature of timing dependencies in tests is that they are intermittent. If it fails too frequently then the team responsible for it will notice and fix it; if it always fails the team responsible will be found and told to fix it so that code can be promoted. This is the sort of situation that gets you to switch off tests so you can promote a change. If you find yourself switching off tests then you're probably not spending enough time maintaining your test code.
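To make the retry temptation concrete, here's a hypothetical sketch (plain JavaScript, not our actual test harness; flakyTest and runWithRetries are made-up names) of a "best 2 of 3" wrapper masking an intermittent failure:

```javascript
// Simulate a timing-dependent test: it fails on its first invocation
// and passes on subsequent ones, just like a flaky test that happens
// to pass when you rerun it.
function makeFlakyTest() {
  let calls = 0;
  return function flakyTest() {
    calls += 1;
    if (calls === 1) throw new Error('intermittent timing failure');
    return 'pass';
  };
}

// The "best 2 of 3" wrapper: rerun the test on failure, up to a limit.
function runWithRetries(test, attempts) {
  for (let i = 0; i < attempts; i++) {
    try {
      return { result: test(), attempts: i + 1 };
    } catch (e) {
      // swallow the failure and rerun - this is where the flakiness
      // (and any real underlying bug) gets hidden from the build
    }
  }
  return { result: 'fail', attempts: attempts };
}

const outcome = runWithRetries(makeFlakyTest(), 3);
// outcome.result is 'pass' on the second attempt: the failure is masked
```

The wrapper "fixes" the build, but the timing bug in the test (or worse, in the product) is still there - which is exactly why best-2-of-3 is so insidious.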
Now there are thousands of tests, and on top of the random test failures, the tests themselves start taking longer and longer, pretty quickly to an unacceptable degree. Time now has to be spent going back and sorting out tests to run very frequently vs. occasionally vs. rarely to get appropriate performance out of the different phases of the test suites without losing the coverage and quality benefits. Cisco's product managers weren't about to assign user stories to us for things they didn't care about and didn't feel responsible for, so the problem would fester until enough engineers on enough different teams were complaining about it that it finally percolated a few levels up and some VP had to step in and re-purpose efforts, assembling a team across the groups with the offending test suites to spend a week or two cleaning up. After the slow downward creep in velocity caused by the problem, velocities drop even further as teams change focus and lose members temporarily, and executives are unhappy.
Pretty soon the occasional build failure is a reliable build failure, sometimes with 2 or 3 random cases failing. Once again, no scrum team is in a position to address all of the issues, we haven't noticed our own intermittently failing tests blocking us (or if we do, we fix that one), we just get stuck by everyone else's blocking us. Note that this is an evil networking effect: the bigger you are, the worse the problem any given level of unreliable tests will cause you, and it goes up faster than linearly, I'm pretty sure. At companies as large as Cisco this becomes a large concern.
Once again it waits for a VP to crack the whip, and teams get raided, and velocity again drops, and executives are again annoyed, and Cisco kicks off another round of layoffs. Not that the problems caused the layoffs, mind you; they were just a regular feature of Cisco life. But I digress.
The main simple rule of thumb I learned at Cisco doing CI/CD is that done right in a large and mature microservices cloud, you spend quite a bit more time coding and maintaining all the different test cases for all the different types of testing than you do coding the actual production code to be tested.

Wednesday, May 24, 2017

PhantomJS, CasperJS and Arrays of Functions

Scraping

I've been doing some scraping - writing apps that fetch HTML content using HTTP GET and the occasional POST.

I've found two reasonably nice solutions for making scrapers easily:

  1. Scrapy - a Python framework, with an optional Splash server available when a full browser implementation (especially JavaScript) is needed.
  2. CasperJS - a Javascript framework built on PhantomJS, a headless browser.


One recent accomplishment has to do with downloading files from a web site, but I'm under non-disclosure and can't talk about that. Dang.

That work was done in CasperJS, which has an interesting approach to defining and executing scrapers and spiders.

CasperJS

CasperJS handles PhantomJS "under the covers" and provides nice wrappers around important features like injecting Javascript code into the browser and waiting on DOM elements, not to mention inputting keystrokes and mouse clicks.

Functions Inside Functions

Using CasperJS, you create a Casper object then start it with an initial URL, which is requested using HTTP(S) via the PhantomJS headless browser.

Instead of directly coding the spider or scraper, you define a series of CasperJS steps using casper.then() (or inside of a casper object use this.then()). Each definition is a function:
casper.then(function doSomething() {
    this.wait(250);
});

These functions are added to an array of function definitions and are not immediately run. When you are done defining them, you call casper.run() and the functions will be invoked in order (maybe - see the bypass() function).

Functions frequently add new functions to the list, so you can be executing step 3 out of 4, and when the step completes you are now executing step 4 out of 7.

You can add logic that skips forward or backward through the array of functions, allowing loops and optional steps.
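As a toy model of that step-array behavior (a from-scratch sketch, not CasperJS's actual internals), the runner below executes an array of named step functions, and a running step can push more steps onto the end of the array:

```javascript
// Minimal step runner: steps are queued, not run immediately, and a
// step may enqueue further steps while the run is in progress.
function StepRunner() {
  this.steps = [];
  this.log = [];
}
StepRunner.prototype.then = function (fn) {
  this.steps.push(fn);  // queue the step, mirroring casper.then()
  return this;
};
StepRunner.prototype.run = function () {
  // steps.length is re-read each iteration, so steps added mid-run
  // are picked up, just as in CasperJS
  for (let i = 0; i < this.steps.length; i++) {
    this.steps[i].call(this);
  }
};

const runner = new StepRunner();
runner.then(function stepOne() {
  this.log.push('one');
  // enqueue another step from inside a running step
  this.then(function stepThree() { this.log.push('three'); });
});
runner.then(function stepTwo() { this.log.push('two'); });
runner.run();
// runner.log is ['one', 'two', 'three']
```

Skipping forward or backward, as described above, amounts to manipulating the loop index in run(); CasperJS exposes that kind of control through its own API.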

Most everything is asynchronous, which can bite you. If you code this.wait(500) and then this.wait(500) again in the same step, they both run asynchronously after the last active bit of that step completes, and they finish at the same time. They do not add additional delay to each other at all if they are in the same .then().

The approach of adding functions everywhere for everything can lead to an accumulation of anonymous functions. That is actually a bad idea, since the debug/log mechanisms available will report the function names being processed - if they exist. It's best to give a unique function name to each and every function:
this.then(function check4errors() {
    var errorsFound = false;
    if (verbose) {
        this.echo('Check for errors');
    }
    // ... perform the checks, setting errorsFound ...
});


Be careful, though. There are also tight requirements around the casper.waitFor()/this.waitFor() and casper.waitUntil()/this.waitUntil() methods provided by CasperJS. The successful case has to be named function then() and the timeout case has to be named onTimeout() or things simply do not work. Here's an example of correct coding within a CasperJS object, so "this" is used rather than casper:
this.waitFor(function check() {
    return this.evaluate(function rptcheck() {
        var btn = document.querySelector('#_idJsp1\\:data\\:0\\:genRpt');
        return ((document.querySelectorAll('#reports\\:genRpt').length > 0) &&
                (btn !== null && typeof btn.onclick === 'function'));
    });
}, function then() {
    if (verbose) {
        this.echo('Found report generation button.');
        this.capture('ButtonFound.png');
    }
    this.wait(100);
}, function onTimeout() {
    this.echo('Timed out waiting for report generation button.');
    this.capture('NoButtonFound.png');
}, 20000);

Pluses and Minuses

CasperJS/PhantomJS are different from most NodeJS apps, so integrating them directly is complicated; writing NodeJS wrappers that run CasperJS scrapers via the command line is straightforward, though. This is how I solved the challenges that I can't talk about due to non-disclosure.

Inability to easily mix NodeJS and CasperJS code is a minus, but not horrendous. The ease of injecting JS into the browser is a plus, and the consistent JS language for both our code and the injected code has benefits too. Linting tools work well with the CasperJS code, enforcing correct coding in the CasperJS and browser injected code at the same time.

Scrapy Splash

Scrapy is nice, and the Scrapy/Splash combination looks like the best bet for truly large-scale approaches. The tools allow you to run an extremely stateful back-end process on a stateless server, with the scrapy-splash adaptation layer handling the tricky state management bits for you. I got this working, but only to a limited degree. It looks like the best solution for a large-scale professional approach, but it does have a higher barrier to entry, with the separate back-end server and the adaptation layer on top of everything else.

I ended up not using this approach, so for now my opinions on Scrapy and ScrapySplash are not well informed. If I ever get a chance to use it for real I'll revisit this article.

Friday, May 19, 2017

Really Tiny Embedded Code

I did a couple of generations of control systems based on 8051 derivatives. These had a bit of ROM - 8K, usually - but only 128 bytes of RAM. The stack used that 128 bytes, and so did your variables. A pretty darn tight fit for interrupt-driven code, which needs to use some stack.

I did the If VI Were IX guitar robots on the 8051. It handled MIDI input over the serial port in interrupts, and it had a timer-driven interrupt that we used both to update our servo loop and to derive the pulse chain used to control position on the pluckers - the shaft with a plastic pick or nib that plucks the string when rotated properly.

We put two on each shaft - as I said, a pick and a nib - and added support for a MIDI mode switch to select between the two. Based on the requested velocity in MIDI note-on commands received from the serial port, we would set the PWM parameters higher (mostly on) for high velocities and lower (mostly off) for low velocities. To make the PWM average out and not be too jerky and noisy, we needed to update it as fast as we possibly could. Bear in mind that our 8051 used a 4 MHz clock, and many instructions took more than 4 cycles, so we got less than 1 million instructions per second. Not much power for handling real-time updates and asynchronous serial input while playing a guitar in real time.

(Old man ranting at lazy kids mode)
Micro-controller chips today usually have hardware PWM circuits, so we can just load a duty cycle number and provide a really fast MHz+ digital clock and we get a great PWM. Luxury! The 8051 I was using had no PWM hardware, so we implemented it in software using interrupts. Messier, less smooth, lots more code and a few variables in a system that had little room for either. We couldn't even get 1M instructions/sec.
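Here's a toy JavaScript simulation of that interrupt-driven software PWM (the real thing was 8051 assembly; the names and numbers here are made up for illustration). Each call to pwmTick stands in for one timer interrupt:

```javascript
// Software PWM, one "timer interrupt" per call: the output is high while
// the phase counter is below the duty threshold, low for the rest of the
// period, then the counter wraps.
function pwmTick(state) {
  const out = state.phase < state.duty ? 1 : 0;
  state.phase = (state.phase + 1) % state.period;
  return out;
}

// Average the output over whole periods: it settles at duty/period.
const state = { phase: 0, duty: 3, period: 8 };
let high = 0;
const ticks = 8 * 100; // 100 full periods
for (let i = 0; i < ticks; i++) high += pwmTick(state);
const dutyCycle = high / ticks; // 0.375, i.e. 3/8
```

The average settles at duty/period, which is why updating as fast as possible matters: the faster the interrupt rate, the shorter each period is in real time and the smoother the result.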

Micro-controllers today also have more RAM - either built in or external; they just don't make them with 128 bytes much any more. Luxury! (You're supposed to hear the Monty Python bit about having to walk to school uphill both ways, not like the lazy kids today; it doesn't come across well in text.) Clocks on modern micro-controllers often run at hundreds of megahertz up into the gigahertz range, hundreds of times faster than the 8051, and they are also 32 bits wide, so each instruction handles and processes 4 times as much data as the old 8051s could.

So we had all of our local variables - assorted modes (damper tied to notes, or off to allow hammer-ons and hammer-offs, or on for a damped sound, etc.), state (plucking has several states, and we need requested and expected positions in order to include the error component in the feedback loop), limit details, channel details, note range details, and more. We also had to have enough left over in the 128 bytes to allow for the registers to be stored during an interrupt (MIDI I/O), with enough room for an additional stack frame for an overlapping timer interrupt (servo and position updating).

We managed to squeeze it all in and it works fine. It helps that registers are only 8 bits and there aren't many of them, and the program counter (pushed onto the stack as the return address) is small too - not all that much needs to be pushed on the stack. The upside of little room is that you simply can't have code bloat and variables must be as simple as possible. The result is small enough that you can realistically squeeze all of the bugs out.

The If VI Were IX installation has never crashed due to software fault, and has outlived every single moving part in the system - MIDI sources had to be replaced with a solid state source, pluckers and strings replaced, yet the 8051 micro-controllers are still fine a decade later.

If I was doing this over again from scratch today, I'd probably base it on a Raspberry Pi system with gigabytes of memory and a flash hard drive with tens of gigabytes more. Luxury!

In My Day We Had to Grind Square Roots Out A Bit At A Time - If We Were Smart

I'm old for my industry, getting into my late fifties. I also started very young, with my first paying tech gig in 1976. I have programmed computers by throwing switches and liked using punched paper tapes because that was a step up.

My first paying gig was a square root. I met Chuck through Paul, who lived down the block from him.
Chuck was, like me, a bit of a math prodigy. He was working on a Z8000-based motion control system and was having problems with the square root calculations needed to figure out distances and how fast to go on each axis. You know: square each individual component or axis, add them, and take the square root. The result is the total distance of the motion as a vector on those axes. Given the desired speed you can then work out how fast each individual axis should go to achieve the correct speed.
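In modern JavaScript (with hypothetical numbers, not the original Z8000 code), that calculation looks like this:

```javascript
// Per-axis speed calculation: total distance is the Euclidean norm of
// the per-axis components, and each axis moves at its share of the
// requested feed rate.
function axisSpeeds(deltas, feedRate) {
  const dist = Math.sqrt(deltas.reduce((s, d) => s + d * d, 0));
  return { dist: dist, speeds: deltas.map((d) => (d / dist) * feedRate) };
}

// A made-up 3-4-0 move at feed rate 10 (units are arbitrary):
const move = axisSpeeds([3, 4, 0], 10);
// move.dist is 5, and move.speeds is approximately [6, 8, 0]
```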

They were using the method of Newton, and it was painfully slow. Too slow. To do motion control in real time, you have to "get ahead" a bit. You start calculating when to take steps on each axis, but don't execute them yet. Instead you build a queue and store up a fair number of steps before starting the first one. This allows you to have slower portions (setting up for the next motion) as long as they are fairly short and the longer bit is faster, using the queue to provide the data you are too slow to calculate. How slow you are in the worst case directly determines how big that queue needs to be. Memory wasn't what it is now; if we were lucky we had 64K bytes for everything. Nowadays that's not even big enough for a stack. That meant the queues were quite a bit smaller still.

The method of Newton: given an integer, determine its square root by starting with an estimate (int/2, for example) and repeatedly:
    Divide the integer by the latest estimate
    Average the estimate and the result: new estimate = (latest estimate + division result)/2
Repeat with the new estimate

This will converge on the correct result, but each round is expensive. For example, the square root of 2 has these successive estimates:
1
1.5
1.41667
1.41422

By the fourth estimate the answer is accurate to about five decimal places; the catch is what each round costs.

Each round requires a divide, which is the slowest instruction on the Z8000. Really slow for the 64 bit divided by 32 bit numbers we were dealing with. The biggest queue they could make would empty after 2 or 3 fairly short motions, used up by the divides in the square root calculations. The solution didn't work.
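A sketch of that integer Newton iteration in JavaScript (the original was Z8000 assembly), counting divides since those dominated the cost:

```javascript
// All-integer Newton square root: start at n/2 and repeatedly average
// the estimate with n divided by the estimate, stopping when the
// estimate stops shrinking. Each round costs one divide.
function newtonIsqrt(n) {
  if (n < 2) return { root: n, divides: 0 };
  let est = Math.floor(n / 2); // initial estimate: n/2
  let divides = 0;
  for (;;) {
    const q = Math.floor(n / est);          // the costly divide
    divides += 1;
    const next = Math.floor((est + q) / 2); // average estimate and quotient
    if (next >= est) return { root: est, divides: divides };
    est = next;
  }
}

// newtonIsqrt(16) settles on root 4 after 3 divides
```

On the Z8000, each of those rounds meant a 64-bit-by-32-bit divide, which is why the queues kept running dry.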

Chuck mentioned the difficulties to me and I got curious. Six years prior in grade school they had taught us how to take square roots manually and I sat down and worked that out, then figured out how to do it in binary. It turned out to be much simpler in binary.

To generate each digit (bit) of the result, you select the largest digit that, when multiplied by the result so far, still fits under the remaining input value. There's a doubling step in there too.

Doubling is just a bit shift, the fastest thing the Z8000 does. In binary the "largest digit" is always 1, which makes largest * so-far the same as so-far, so the whole multiply needed in base 10 drops out in binary, completely eliminated. It all reduces to a simple compare: if the candidate is too big then this bit is not set and so-far = so-far * 2 (again just a shift, appending a new 0 bit in the least significant position); otherwise the bit is set and so-far = so-far * 2 + 1 (another bit shift, plus an increment).

So we do 2 shifts & a compare for each 0 bit in the result and 2 shifts plus a compare and an increment for each 1 bit in the result. This is already faster than a single divide. Since the short motions are the most challenging, having less time to build up the queue with the simple linear timing generation algorithm, I optimized the algorithm for small distances. If the top 32 bits are 0 then we only need to do a 32 into 16 bit square root, taking half the time. For the 32/16 bit case, if the top 16 bits are 0 it turns into a square root of a 16 bit number, twice as fast again. Optimized to the byte, the shortest motions end up needing at most 8 cycles through our extremely fast 2 shift plus maybe 1 increment loop. This was screamingly fast, and immediately made the real time system work. The queues were more than long enough even for short motions. We were able to reduce the size of the queues, freeing up memory for the part program that the machine would cut and for code to be added when we added features.
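Here's the bit-by-bit method written out in JavaScript as a sketch (the original was Z8000 assembly; this version handles non-negative 32-bit inputs):

```javascript
// Digit-by-digit (bit-by-bit) integer square root: only shifts, compares,
// additions and subtractions - no divides and no multiplies.
function binaryIsqrt(n) {
  let root = 0;
  // Start with the highest power-of-four place value that fits in 32 bits,
  // then shrink it to the largest one <= n. This is the same idea as the
  // 64-to-32-to-16-bit width optimization: small inputs mean few iterations.
  let bit = 1 << 30;
  while (bit > n) bit >>= 2;
  while (bit !== 0) {
    if (n >= root + bit) {
      n -= root + bit;          // this bit of the result is 1
      root = (root >> 1) + bit; // shift and set the new bit
    } else {
      root >>= 1;               // this bit of the result is 0: just shift
    }
    bit >>= 2;
  }
  return root;
}

// binaryIsqrt(2) is 1, binaryIsqrt(63) is 7, binaryIsqrt(1000000) is 1000
```

Each loop iteration is just the shifts, a compare, and the occasional add described above - no divide instruction anywhere.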

They paid me $1,000 for solving their problem. I was working a near minimum wage job at the Neptune Theater for $1.45 an hour or something like that. This was more money than I made in 3 months.

This experience inclined me to go into the field I'm still in, software engineering and related technical specialties.

That gives me 41 years as a software engineer as of 2017, there can't be all that many around with more. The industry in the seventies was tiny by modern standards, and most of the engineers at that time were electrical engineers or other technical sorts who had switched over to meet the demand for the new skills, so they were already a decade into their careers for the most part. Those folk are in their seventies now and the few who did not leave the industry over the decades have largely retired now.

If I can make it to my retirement in the industry in a bit over another decade, I'll probably be one of the most experienced software engineers on the planet. I wonder how many folk who started before the mid seventies are still in the industry? Where's the leader-board when you need it?

Non-disclosure

I worked recently on a project where I got to build a new set of services from scratch and deploy them to the cloud. Since the services are intended to be used with smart phone apps, we ended up hosting on Google's cloud, GCE, since they have quite a few useful tools available for supporting smartphones like Firebase and nicely integrated credentials and authorization management.

Unfortunately, since this is a commercial product that is likely to face competition, or at least inspire competition once it is released, I'm under non-disclosure. I'm not allowed to say much about what the services I designed, built and deployed actually do.

Once the product is actually released I'll be able to blog about it, but for now any blogs on the topic (like this one) can't talk about much and therefore end up being pretty short.