Exploring Commercial Software Development
Lessons from Leading Small Commercial Systems Teams
https://henrylafleur.com/blog

Commercial Software (And How I Define It)
Thu, 08 Aug 2019
https://henrylafleur.com/blog/2019/08/08/commercial-software-and-how-i-define-it/

Have you ever thought that you would like to make money by writing software? There are a few ways you can do this. You can be a software developer for hire, or you can write and sell your own software. Either way, you can create either custom software or commercial software.

I’ve worked in commercial software for the past 20 or so years. What do I mean by commercial software? Software meant to be sold or leased, or software that produces ongoing revenue through its proliferation. This is in contrast to custom software. Examples of custom software are corporate software (software for a specific corporation), government software (custom software for a government), and open-source freeware (software created for free just for the joy of it).

Examples of commercial software are:

  • Operating systems like Windows, Mac OS, and Red Hat Enterprise Linux
  • Applications such as Microsoft Office or MATLAB
  • Services such as AWS or Azure
  • Web sites such as Confluence
  • Apps like Candy Crush

The list could go on forever and include things like Facebook (where you are the product), Google Search (paid for by advertising), and LinkedIn (with a multi-faceted revenue model).

A third category is open source software, which has more in common with commercial software than custom software because it is usually for the masses. Open source is covered by various writings all over the web, so I won’t cover that here.

Revenue Models

There are several revenue models for commercial software.

  • Direct sales of licenses
  • Monthly or annual fee
  • Pay for support
  • Advertisements

You can use any combination of these for revenue. For example: sell a license and charge for support by incident, offer open source software with a monthly charge for support, build an ad-supported app or platform, or provide cloud services such as Software as a Service (SaaS) for a monthly fee.

Primary Quality Attributes of Commercial Software

  • Scalability
  • Supportability and Upgradability
  • User Experience
  • Customizability
  • Uniqueness
  • Utility or Filling a Specific Need

Commercial systems must be scalable from just one user to all users in the world. How does this work? Well, think about a given commercial system and how it scales from one to many users. For example, Linux can scale from a watch to the largest supercomputer. Microsoft Word can scale because it is distributed and users can share files or publish documents as PDFs to be shared on the web. Word has even been scaled to the cloud, with multiple simultaneous users and cloud storage, in Office 365. Even small commercial systems are built for scale in similar ways. There must be easy ways to install, upgrade, and collaborate.

Take for example a system I helped build and manage, Resource Scheduler. This system was scalable because it ran on scalable databases, MS SQL Server and Oracle. The code itself was single threaded, so it did not scale on the processor. We had additional issues with scalability because it was a 32-bit application, which limited RAM usage to 4GB; if pushed to its limits, the application would run out of RAM. But because it was built on a scalable back-end, the system itself could scale in many ways. Also, because it was migrated to .NET, the 32-bit components could eventually be replaced by platform-independent .NET components (even if we had to build them ourselves).

Commercial systems must be easy to support by trained technical support personnel. This allows support to be scalable by hiring non-developers to work with end users. Additionally, support should be able to upgrade users easily. An alternative would be an application that upgrades itself.

It’s helpful if applications provide logging and telemetry for errors that occur when users are using the system. These errors can be collected through error reporting at your company and then aggregated and analyzed. You can even prompt the user to describe what they were doing at the time, if this is feasible.
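
As a minimal sketch of that idea (the ErrorReporter class, log location, and upload step are assumptions for illustration, not any particular product's API), a .NET desktop application might hook unhandled exceptions and append them to a local log that support tooling later uploads and aggregates:

using System;
using System.IO;

// Hypothetical error-reporting hook: collect unhandled exceptions locally
// so they can be uploaded, aggregated, and analyzed later.
static class ErrorReporter
{
    static readonly string LogPath =
        Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData),
                     "MyApp", "errors.log"); // assumed location

    // Call ErrorReporter.Register() early in Main().
    public static void Register()
    {
        Directory.CreateDirectory(Path.GetDirectoryName(LogPath));
        AppDomain.CurrentDomain.UnhandledException += (sender, e) =>
        {
            // Record a timestamp and the full exception details.
            File.AppendAllText(LogPath,
                DateTime.UtcNow.ToString("o") + " " + e.ExceptionObject + Environment.NewLine);
        };
    }
}

Aggregating these logs across installs tells you which errors actually hit users most often.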

This leads into another quality attribute: user experience (or UX). The end user should be able to use the system without the help of support or even a user’s manual. User experience is its own field of study and it warrants special attention. To put it briefly, user experience is the study of design that leads to an experience that is intuitive to the user and leads them down the correct path to getting the most common tasks done easily.

Other forms of telemetry, such as recording clicks, timing of actions, or even eye tracking can help improve your user experience. There are tools and libraries for these. You can entice your users to use these features but you should always ask permission and anonymize data so you are not spying on your individual users.

Commercial systems need to be either customizable, extensible, or both. This is not a hard requirement, but think about many existing commercial systems: operating systems have drivers, MS Office has macros, Photoshop has plug-ins, browsers have JavaScript and extensions, and so on. Systems often need customization for reports. For example, invoices need the company name, address, and logo at the top.
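
As a hedged sketch of the plug-in idea (the IInvoiceFormatter interface, the plug-in folder, and the loader are invented for illustration, not a real product's API), an application can expose a small interface and load customer-supplied implementations at startup:

using System;
using System.IO;
using System.Linq;
using System.Reflection;

// Hypothetical extensibility point: customers drop DLLs implementing
// this interface into a "plugins" folder next to the application.
public interface IInvoiceFormatter
{
    string FormatHeader(string companyName, string address);
}

public static class PluginLoader
{
    public static IInvoiceFormatter[] LoadFormatters(string pluginFolder)
    {
        return Directory.GetFiles(pluginFolder, "*.dll")
            .Select(Assembly.LoadFrom)                    // load each candidate assembly
            .SelectMany(a => a.GetTypes())                // enumerate its types
            .Where(t => typeof(IInvoiceFormatter).IsAssignableFrom(t) && !t.IsAbstract)
            .Select(t => (IInvoiceFormatter)Activator.CreateInstance(t))
            .ToArray();
    }
}

The application only ever talks to the interface, so customer code can change the output without touching the core product.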

Uniqueness sounds quaint, but commercial systems need to have their own unique traits. This is needed in order for the system to fit into a specific market segment. For example, my current employer utilizes a unique food order taking system that is fast and efficient for the cashier. It has unique menu features that support specific verticals, such as fast casual, pizza, and bars.

Summary

Commercial software must be scalable, upgradable, have a good user experience, allow for user customization to their tasks, and be unique to fill a specific need or market segment. There are many ways to make money off of commercial software, but the key thing is that you create it to make money. Thinking about the best way to protect your revenue streams is important because otherwise you won’t be able to pay the bills.

In the future, I will write more about the quality attributes required for commercial software.

C# Development in Emacs on Android
Sat, 08 Jun 2019
https://henrylafleur.com/blog/2019/06/08/c-development-in-emacs-on-android/

Why Android

I wanted to be able to do simple, multi-file development on my phone, so I could do some hobby programming when I don’t have a PC handy. After noticing that there are no full C# IDEs on Android, I thought about Emacs in Termux.

Termux has several limitations and no Mono port, but I found I could run Mono under Arch Linux in Termux. With this, one can run OmniSharp and get auto-completion.

One thing I like about Termux is that it installs from the app store and doesn’t require rooting the device.

To do this, you will need about 3-4GB free on your primary storage on your Android device. Otherwise, you will run out of space installing all of the required components.

Use Arch Linux on Termux

The most up to date instructions on installing Arch on Termux are here:

https://wiki.termux.com/wiki/Arch

To repeat the page here, type the following:

pkg install wget
wget https://raw.githubusercontent.com/sdrausty/TermuxArch/master/setupTermuxArch.sh
bash setupTermuxArch.sh

This will install Arch Linux on Termux. It runs under proot (think of it as pretend root plus chroot). This causes a few issues with some functions, including some Emacs functions. I’ll work through those issues later.

When you launch Termux, go into Arch Linux as follows:

startarch

Note that Arch Linux takes over 1GB of storage.

Install Emacs and Mono

Once you are in Arch, install Emacs through the pacman repos:

pacman -S emacs

Now that you are running under Arch, you can install Mono:

pacman -S mono

Install Nuget

I went to the NuGet download page and got the command line version from Microsoft:

https://www.nuget.org/downloads

You can always try this as well:

wget https://dist.nuget.org/win-x86-commandline/latest/nuget.exe
mv nuget.exe /bin/nuget.exe

I then created a shell script called nuget in /bin to run NuGet:

#!/bin/bash
exec mono /bin/nuget.exe "$@"

Remember to make nuget executable (chmod a+x nuget). With that, you can just type nuget commands at the shell as you would in Windows:

nuget sources

Install Omnisharp

OmniSharp will give you code completion, which makes C# development much easier.

The installation outlined on the OmniSharp page does not work. This has to do with set-file-modes (i.e. chmod) in Emacs not working. Downloading from the web from within Emacs does not work either. To work around the chmod issue, run the following elisp first (type it in *scratch* and evaluate with C-j):

(defun set-file-modes (one two) nil)

Because downloading from web sites doesn’t work, this breaks the OmniSharp Server install, so you have to install it manually.

Installation is outlined here https://github.com/OmniSharp/omnisharp-emacs, but below are the steps I took.

MELPA Setup

First set up MELPA https://github.com/melpa/melpa#usage by adding this to your .emacs or init.el:

(require 'package)
(add-to-list 'package-archives '("melpa" . "https://melpa.org/packages/") t)
(package-initialize)

Omnisharp Setup

Re-start Emacs and then you can install OmniSharp-Emacs as follows:

M-x package-refresh-contents RET
M-x package-install RET omnisharp RET

Even with all of this set up, it won’t work until the server is installed.

Manually install OmniSharp Server

There is a page in the OmniSharp project that explains how to install the server. https://github.com/OmniSharp/omnisharp-emacs/blob/master/doc/server-installation.md

When I went to install the server, I had a few issues. Below is the method I used to install the server.

First, go to this page and get the link to the Mono server (not the http variant) and download it. https://github.com/OmniSharp/omnisharp-roslyn/releases Make sure to use omnisharp-mono.zip. Once you have downloaded the file, unzip it:

mkdir /usr/share/omnisharp-roslyn-server
unzip omnisharp-mono.zip -d /usr/share/omnisharp-roslyn-server

Create a shell script to run the server, OmniSharp.sh in /usr/share/omnisharp-roslyn-server:

#!/bin/bash
exec mono /usr/share/omnisharp-roslyn-server/OmniSharp.exe "$@"

Set Up Your init.el or .emacs

Set up autocomplete as you like. I didn’t want to use the methods listed (Company mode or Flycheck), so below is how I set it up using the default installed auto-complete functionality in Emacs:

(use-package omnisharp)
(add-hook 'csharp-mode-hook 'omnisharp-mode)
(setq omnisharp-server-executable-path "/usr/share/omnisharp-roslyn-server/OmniSharp.sh")
(define-key omnisharp-mode-map (kbd ".") 'omnisharp-add-dot-and-auto-complete)
(define-key omnisharp-mode-map (kbd "<C-SPC>") 'omnisharp-auto-complete)

Working With Csproj Files

You may want to work with csproj files. For this, there is a csproj mode that contains snippets for completing the csproj XML elements. Csproj mode looks like a work in progress, but it has some usefulness. https://github.com/omajid/csproj-mode

Download the mode using git (make sure to install git with pacman -S git). Copy or link the snippets under your .emacs.d folder.

 mkdir ~/src
 cd ~/src
 git clone https://github.com/omajid/csproj-mode
 mkdir ~/.emacs.d/snippets
 ln -s ~/src/csproj-mode/snippets/csproj-mode ~/.emacs.d/snippets/csproj-mode

Install the yasnippet minor mode and csproj-mode in Emacs:

M-x package-install RET yasnippet
M-x package-install-file RET ~/src/csproj-mode/csproj-mode.el

Finally, add hooks for csproj mode to automatically load the snippets and yas-minor-mode (if you are not using yas-global-mode):

(add-hook 'csproj-mode-hook 'yas-minor-mode)
(add-hook 'yas-minor-mode-hook 'yas-reload-all)

For more information on using csproj, see the MSBuild documentation. https://docs.microsoft.com/en-us/aspnet/web-forms/overview/deployment/web-deployment-in-the-enterprise/understanding-the-project-file This is useful for those of us used to the IDE managing the sln/csproj files for us.

Below is the build file for the Hello World program shown later in this post:

<Project ToolsVersion="4.0" DefaultTargets="build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <Target Name="build">
    <ItemGroup>
      <Compile Include="*.cs" />
    </ItemGroup>
    <CSC Sources="@(Compile)" 
         OutputAssembly="HelloWorld.exe" 
         EmitDebugInformation="true" />
  </Target>
</Project>

Building C# Programs

Let’s build a simple Hello World object-oriented C# project. I recommend typing it in with csharp-mode and the OmniSharp server started to see what the experience is like.

To start the OmniSharp server in Emacs:

M-x omnisharp-start-omnisharp-server RET

Start with a string emitter interface. This way we can change where the string is output later:

public interface IEmitter
{ void Emit(string message); }

And a console emitter that will output to the console. Other implementations could write anywhere else (browser, image, etc.):

public class ConsoleEmitter : IEmitter
{
  public void Emit(string message)
  { System.Console.Out.WriteLine(message); }
}

A message writer class that uses constructor dependency injection to get the class to write to the output:

public class MessageWriter {
  public string Message { get; set; }
  public IEmitter Emitter { get; set; }

  public MessageWriter(string message, IEmitter emitter)
  { Message = message; Emitter = emitter; }

  public void Emit() { Emitter.Emit(Message); }
}

A main method to pull it all together:

public class HelloApp
{
  public static void Main(string[] argv)
  {
    IEmitter con = new ConsoleEmitter();
    MessageWriter w = new MessageWriter("Hello World", con);
    w.Emit();
  }
}

Compile the app:

$ xbuild

Run the app:

$ mono HelloWorld.exe

Where Are We Now

At this point you can work with C# and csproj files in Emacs and get autocomplete with C#. You can also get help building csproj files using snippets.

With this, you can only build console programs or simple .NET executables and libraries, but you can work on multi-file projects with a proper build.

Other Articles on C# and Emacs

Here are the other articles that helped my Droid/C#/Emacs journey:

Processing Unformatted Data
Wed, 27 Sep 2017
https://henrylafleur.com/blog/2017/09/27/processing-unformatted-data/

I recently began a project to import data from unstructured documents. I have done this before, and it can be challenging.
Unlike a technology such as IBM Watson, which can read prose and other unstructured data, I am speaking of data that is only structured by its formatting. It is not so much unstructured as it is not meant to be machine-readable the way a JSON result would be. Think HTML, PDFs, or scanned documents.

There is nothing really new here, but it is interesting to see the different ways that data can be represented and how it can be matched.

I will discuss several ways that the data can be processed. The first step is identifying where the data is located in the document. Second is creating a map or a heuristic that can be reused for each document to identify the data. Third is extracting the data using the map and processing it into a known format that can be used programmatically. Fourth is validating the data against expected values and reporting on any problems with the data. Fifth is actually placing the data in its destination.

Identifying Data

The first thing to do is to look at the document structure and identify source data. You also need to identify the varying ways that data can be represented in the source document and document those as well. For example, let’s say an order comes in as either pre-paid, cash, or balance due. There may be a section of the source document that says Prepaid, Unpaid, FOB: Sender, COD, etc. You would look at many samples of the data and decide where this data can be found.

You will also need to determine how you will extract the data from the source document. You may want to just read it, based on some identifier, and then put it where it goes in your system. You may need to pre-process it, such as using OCR to read characters. If you have access to cognitive services, you may want to train them to read the data using input and sample output. You may have to use identifiers, such as labels, or even position in the document or on the page to identify the data. It will all depend on what your source document looks like and how much formatting is already there.

Sometimes it is useful to produce an intermediate format. You can take unformatted plain text or PDF data and convert it to XML or JSON. This provides separation of concerns and decouples data validation and import from data recognition.
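
For instance, here is a minimal sketch (the Order element and field names are invented for illustration) of writing extracted fields to an XML intermediate document, so that validation and import never need to know where the values came from:

using System.Collections.Generic;
using System.Xml.Linq;

// Hypothetical intermediate format: whatever the extraction step finds
// is written to a simple XML document that the import step consumes.
public static class IntermediateWriter
{
    public static XDocument ToXml(IDictionary<string, string> extractedFields)
    {
        var order = new XElement("Order");
        foreach (var field in extractedFields)
        {
            // One element per extracted field, e.g. <PaymentTerms>COD</PaymentTerms>.
            order.Add(new XElement(field.Key, field.Value));
        }
        return new XDocument(order);
    }
}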

Mapping Data

Next, you will map data by looking at the identified data and then figuring out how you want to put it in your system or database. For example, for a payment record, you may want to record an amount due, or you may want to record a payment of a specific type (Visa, Cash). You may want to record payment due based on COD and shipping amount due based on FOB (free on board sender; the sender doesn’t pay shipping).

Once you have this map, you will need to codify it. Put it in a mapping table or write some code. It’s often a useful exercise to map out the data in Excel or some other format that is easy to visualize.

There are tools that let you map data from a source into a database like SQL Server. These tools include SQL Server Integration Services and Azure Logic Apps. Other companies have tools as well, such as Informatica and SAP, but the Microsoft tools are easy to start with and fairly common. When you use these tools, you simply have to extract the data and then map it to its final destination. They are a real time saver.

Extracting the Data

Once you have a way to extract the data, implement the code to do it. For example, if the source data is HTML, you can use an HTML library to read the data. Similarly, with PDFs you can use a library like Apache PDFBox; iText is a commercial library that is good for extracting data from PDF files.

One way to map the data is to read the map, for example an XPath to an HTML or XML node, and then read the data at that location.
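
As a small illustration (the field names and XPath expressions are hypothetical), the map can be a dictionary from destination field to XPath, applied to each incoming XML or XHTML document:

using System.Collections.Generic;
using System.Xml;

public static class XPathExtractor
{
    // Hypothetical map: destination field name -> XPath into the source document.
    static readonly Dictionary<string, string> FieldMap = new Dictionary<string, string>
    {
        { "OrderNumber",  "//Order/Header/Number" },
        { "PaymentTerms", "//Order/Header/Terms" }
    };

    public static Dictionary<string, string> Extract(string sourcePath)
    {
        var doc = new XmlDocument();
        doc.Load(sourcePath);

        var result = new Dictionary<string, string>();
        foreach (var entry in FieldMap)
        {
            // SelectSingleNode returns null when the node is missing;
            // record an empty value so validation can flag it later.
            XmlNode node = doc.SelectSingleNode(entry.Value);
            result[entry.Key] = node == null ? "" : node.InnerText.Trim();
        }
        return result;
    }
}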

Validating Data and Reporting Problems

Now that the data has been extracted, it needs to be validated. The data should be validated for type and for range. For example, a birthday should be validated to be a date in the past. Decimal values for money should be validated and/or rounded as necessary.

Also, you need to validate whole entity data. For example, what if you receive an order record with no items? It may not make sense and the entire input may be rejected.
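
Here is a hedged sketch of both levels of validation (the Order type and its fields are illustrative only, not a real schema): field-level checks for type and range, plus an entity-level check that rejects an order with no items:

using System;
using System.Collections.Generic;

// Illustrative entity; real imports would map to your own schema.
public class Order
{
    public DateTime? BirthDate;
    public decimal Total;
    public List<string> Items = new List<string>();
}

public static class OrderValidator
{
    public static List<string> Validate(Order order)
    {
        var errors = new List<string>();

        // Field-level checks: type and range.
        if (order.BirthDate.HasValue && order.BirthDate.Value > DateTime.Today)
            errors.Add("Birth date must be in the past.");

        // Money: round to two decimal places rather than trusting the source.
        order.Total = Math.Round(order.Total, 2, MidpointRounding.AwayFromZero);
        if (order.Total < 0)
            errors.Add("Order total cannot be negative.");

        // Entity-level check: an order with no items is rejected outright.
        if (order.Items.Count == 0)
            errors.Add("Order contains no items; input rejected.");

        return errors;
    }
}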

Placing Data in the Destination

If you’ve done everything above, you should be ready to insert your data. The data should be clean and sanitized and ready for insertion.

What Else?

Going forward, you would just implement the data import. The steps above aren’t that much different from users entering data into a web app. The main difference is that this process is usually fully automated and unattended. If there are validation errors, they need to go into a log and be sent to someone as a message.

There are other data types to consider also. What if you want to import image data? You would need to figure out how to separate the images in the input and then put them in storage. This could also extend to speech or even video.

I have not discussed machine learning or cognitive services here, but they give you yet another avenue to research for processing unformatted data.

I may come back to this topic in the future and give examples of each. If you need to import some unformatted data, try this process and let me know how it goes.

I Took a Free Blogging Course
Sat, 16 Sep 2017
https://henrylafleur.com/blog/2017/09/16/i-took-a-free-blogging-course/

I started following John Sonmez, AKA Simple Programmer, on Twitter, actually after he followed me. I looked at a few of his tweets and decided that it would be something I would read periodically, so I followed him.

When I saw he was offering a free course on blogging, I decided to give it a try. The course was brief and simple, but I think it has useful content and makes it easy to continue blogging on a regular basis. You can read about it here on his blog.

The key to any skill is deliberate practice. The practice and process of blogging on a regular basis gives you lots of good practice writing about, in this case, software development. Because I have written a fair amount of technical documentation in my job for many years, I have that kind of writing under my belt, but not writing aimed specifically at the public.

I had already set up a blog, but I was not blogging much. I actually became motivated to start blogging again after listening to a podcast by Chris Guillebeau called Side Hustle School. The quote at the end of every episode is, “inspiration is good, but inspiration combined with action is even better.”

I wanted a process for making blog posts on a regular schedule. The best thing about the class is that it gets you on track with a host of topics and a plan to blog regularly. This gives you the chance not only to practice technical writing and get feedback, but also to promote yourself. This is especially good if you are looking to find a job or promote your business. It also motivates you to learn more, which in turn gives you more to blog about.

So, check out John Sonmez’s short course. I can’t say I agree with everything in it, or that there is no self-promotion going on there, but it is definitely worth going through and brief enough to be worth your time. Check it out!

 

Promises, Promises in Node.js
Thu, 25 Feb 2016
https://henrylafleur.com/blog/2016/02/25/promises-promises-in-node-js/

In an earlier blog post, I talked about rewriting a deployment script to use promises.

Let me recap the earlier problem: I was using rsync to copy files remotely and was encountering network problems when too many copies were happening simultaneously. It was even causing my hosting provider to lock me out for a few minutes. Running parallel instances does not make rsync more efficient; it is already as efficient as it is going to be. Because the rsyncwrapper library I’m using ran things asynchronously, I ended up making a promise-returning version of its rsync function, using the Q library and supporting callbacks, so the copies could be serialized:

var Q = require('q'); // Installed via $ npm install q

// Load the rsync module
var rsync = require("rsyncwrapper"); // Installed via $ npm install rsyncwrapper
// Convert the rsync function into an asynchronous function that returns a promise.
function rsyncAsync(lRsyncOpts, reject, resolve)
{
    // Using the Q defer pattern
    var deferred = Q.defer();

    rsync(lRsyncOpts, function(err, stderr, out, cmd) {
        if (err) {
            reject(err, stderr, out, cmd);
            deferred.reject(new Error(err));
        } else {
            resolve(err, stderr, out, cmd);
            deferred.resolve();
        }
    });

    return deferred.promise;
}

As you can see, the async version takes two callbacks, one for reject and one for resolve, matching the signature above. If this were a library function, I might have made those optional, but this is just for my deploy script, so I refactored only what I needed.

Good Advice on Promises

In order to get the promises to work, I had to do quite a bit of writing and rewriting of my script. But now the script is much better and easier to read and debug than ever before. And, I now understand promises.

The article that was the biggest help was Nolan Lawson’s We have a problem with promises. He boils it down to three things you should do in .then():

  1. return another promise

  2. return a synchronous value (or undefined)

  3. throw a synchronous error

If you don’t do one of those in .then(), that’s when things get wacky. My code was still going asynchronous because of some of these mistakes (see the end of Lawson’s article for timing charts). Also, you pass .then() a function that returns a promise, not the promise itself!

Ironically, in his article, he talks about not using deferred. But this contradicts the Q documentation (see https://github.com/bellbind/using-promise-q), which says to use deferred. I understand his point: if a library already returns a promise, you don’t need to use deferred to create a new promise, because you already have a promise to return. (As I was writing this, I noticed that my calls to rsyncAsync were not returning its promise, but were using deferred. So once something creates a promise, you don’t need to create another promise when you call it! My code keeps getting cleaner.) Also, in ES6, things should work more easily.

Using Promises to Do the Work

So to copy a folder, I wrote these functions:

function RsyncOpts()
{
    this.src = ".";
    this.dest = rsyncuser+"@"+rsyncserver+":";
    this.ssh = true;
    this.port = rsyncport;

    this.getOpts = GetOpts;

    function GetOpts(source, destPath, bLocal, sshport) 
    {
        var newObj = new RsyncOpts();

        newObj.src = source;
        // Only set the destination server if remote.
        newObj.dest = (bLocal?"":newObj.dest) + destPath;
        // Only use SSH for remote.
        newObj.ssh = !bLocal;
        // Local port: Undefined, or use the default sshport or the one passed.
        newObj.port = (bLocal?undefined:(!sshport?this.port:sshport));

        return newObj;
    }
}
// Create global/singular for RsyncOpts factory.
var rsyncOpts = new RsyncOpts();
var path = require('path');
var ds = path.sep; // Environment directory separator.

function CopyFolder(source, dest, bLocal, rsInclude)
{

    // Get the fully qualified source path
    var sSrcRoot = path.resolve(source);

    var dRsyncOpts = rsyncOpts.getOpts(sSrcRoot + ds, dest + ds, bLocal);
    // Archive mode, include folders recursively, exclude the .git dir, exclude all files
    dRsyncOpts.args = ['-a', '-f"+ */"', '-f"- .git/"', '-f"- *"'];
    // Include files as passed (overrides exclude-all for only these files)
    dRsyncOpts.include = rsInclude;
    dRsyncOpts.recursive = true;

    // Return the promise to copy the folder.
    // (hRsyncErr and ResolveRsync are functions to handle the error or success.)
    return rsyncAsync(dRsyncOpts, hRsyncErr, ResolveRsync);
}

Notice how the return value of rsyncAsync is simply returned. We already have a promise here, so we don’t need to create another one.

I completely eliminated the use of fs-extra copy command. Rsync is much more efficient and works either locally or remotely. Also, the command required me to walk the tree because the include/exclude functionality didn’t work on a file-by-file basis.

This is much better than in my last blog post and much cleaner. Using .then() (the ALL-CAPS names are constant folder names), we can do this:

function CopyFile(source, dest, bLocal)
{
 var lRsyncOpts = rsyncOpts.getOpts(source, dest, bLocal);

 return rsyncAsync(lRsyncOpts, hRsyncErr, ResolveRsync);
}

var rsPHPFile = ["*.php"];
var bLocal = ?; // Boolean for local or not.
var destRoot = "/home/user/public_html";
CopyFile(BOWER_COMPONENTS+ds+ANGULAR+ds+ANGULAR+JS,
         JOURNAL+ds+SCRIPTS+ds+ANGULAR+JS, true)
  .then(function () {
    return CopyFile(BOWER_COMPONENTS+ds+ANGULAR_RESOURCE+ds+ANGULAR_RESOURCE+JS,
                    JOURNAL+ds+SCRIPTS+ds+ANGULAR_RESOURCE+JS, true); })
  .then(function () {
    return EnsureFolder(PHPAR, destRoot, bLocal); })
  .then(function () {
    return CopyFolder(PHPAR, destRoot+ds+PHPAR, bLocal, rsPHPFile); })
  .then(function () {
    return EnsureFolder(MODELS, destRoot, bLocal); })
  .then(function () {
    return CopyFolder(MODELS, destRoot+ds+MODELS, bLocal, rsPHPFile); })
...
  .then(function () { console.log('Deployment Complete'); })
  .done();

// Ideally, put a .catch(error handling function) here!

So with this, each copy happens in sequence. The EnsureFolder function makes sure that the destination folder exists first; rsync doesn’t work if the target folder doesn’t exist! (Remember that .then returns a promise, so pay close attention to the function return values below.)

// Use rsync to ensure that the root folder is created.
function EnsureFolder(src, destRoot, bLocal, index)
{
    // Go through an array of path variables.
    if (Array.isArray(src)) {
        var sPaths = src;
    } else {
        var sPaths = src.split(ds);
        index = 0;
    }

    // Get the first/current destination folder path.
    var destFolder = "";
    for (var i = 0; i <= index; i++) {
        destFolder += sPaths[i] + ds;
    }

    // Use rsync opts to only create a folder.
    var lRsyncOpts = rsyncOpts.getOpts(destFolder, destRoot + ds + destFolder, bLocal);
    lRsyncOpts.args = ['-f"+ */"', '-f"- *"'];
    lRsyncOpts.recursive = false;

    // Rsync this folder and then recursively ensure the next folder (if required).
    return rsyncAsync(lRsyncOpts, hRsyncErr, ResolveRsync)
        .then(function () {
            if (index+1 < sPaths.length)
                return EnsureFolder(src, destRoot, bLocal, index+1);
        });
}

So far, deployments using Node.js are not as straightforward as using a shell script, but the script is cross-platform, running on Node (I’ve yet to try running it on Windows, though!). Also, it allows me to sharpen my JavaScript skills 🙂

Promises are allowing me to run things asynchronously when I need to, and this should increase the effectiveness of my deployments.

EDR Systems are IoT Systems
Wed, 10 Feb 2016
https://henrylafleur.com/blog/2016/02/10/edr-systems-are-iot-systems/

In oil and gas, specifically in upstream oil and gas drilling operations, the drilling activities are monitored by a system known as an electronic drilling recorder, or EDR. An EDR records various items at regular intervals (from 1 second to 10 seconds, typically). Common readings include hook load (weight on the pulley system on the rig derrick), block position (the position of the pulley system on the rig derrick), drilling rotational RPM, pump pressure, etc. Derived readings include rate of penetration (drilling speed), weight on bit, measured depth (hole depth), true vertical depth (how far down are we?), etc. Other readings include mud pit volumes, pump strokes, etc.

In order to get the various readings, EDR systems employ sensors. There are sensors for weight, fluid depth, pressure, some equipment has built-in sensors, etc. and the sensor data is acquired through a variety of protocols (Modbus, Profibus, OPC, Ethernet IP, and proprietary protocols). These are standard industrial protocols for getting data from sensors and providing them to monitoring equipment.

EDR Systems Connect Things

EDR systems allow things to be connected and monitored. They also allow the data to flow out of the rig and onto the Internet. There are a variety of ways that this is allowed. For example, a block position encoder can be connected to the EDR system to measure the block position as it moves up and down the derrick.

EDR systems can also talk to other systems, such as control systems. Many pieces of equipment have control system components that allow computer control of the equipment. These devices can be networked to EDR systems using Internet protocols, and EDR systems can read data from them. Because these systems are used to control heavy equipment, it is important that there be limitations on the connectivity between EDR systems and control systems. It is probably a good idea to control and monitor the network traffic between the two, for example by placing them in two separate zones with limited access to each other.

Other systems may also have data fed out of those systems using standard oil industry protocols such as WITS or WITSML. Many EDR systems allow these types of inputs to their monitoring functionality.

Finally, EDR systems connect upstream from the rig to the Internet. Because rigs are often remotely located, the only connectivity is either over cellular networks (Edge/3G/4G) or satellite networks. This means that connectivity is often inconsistent and dependent on the weather. There is also quite a bit of electromagnetic interference on rigs that may cause issues with cellular or wireless connectivity.

These systems can support two-way communication with the Internet to both push data to the Internet and get updates from the Internet. As with control systems, security concerns are important and must be considered with this type of connectivity. The EDR systems are used for advisory functionality, sometimes referred to as SCADA (Supervisory Control and Data Acquisition), which is important to those drilling the well. The data is used to make important decisions and should be protected adequately.

Sample EDR System and Data Flow

EDR System as IoT System

As you can see, an EDR system is an Internet of Things. It takes things and connects them using various protocols. It then puts them on an intranet or on the Internet. It is capable of connecting all of your devices and reporting on them.

If an EDR system is an IoT system, then could standard IoT frameworks simply replace EDR systems? They would need to be durable and industrial strength to do so. The question comes up because EDR systems can cost $100s to $1,000s per month. There are FOSS IoT systems, commercial IoT systems (from Microsoft, for example), and Industrial Internet systems from GE. One would need to be careful with such IoT systems to make sure that all regulatory and safety standards are met (hazardous location classification, for example).

What you would gain from using an off-the-shelf IoT system is the economies of scale that you would not get with EDR systems. Since EDR systems are almost purely proprietary and are manufactured in small lot sizes, they will not be able to keep up with innovations in massively manufactured IoT systems.

Conclusion

It will be interesting to see if IoT and Industrial Internet systems will displace EDR systems, as well as other SCADA systems. There are already vendors advertising these types of services. As the IoT frameworks mature and are expanded to meet more and more use cases, they will more than likely overtake the smaller proprietary systems used to do the same functions.

Node.js — and rsync
Thu, 21 Jan 2016
https://henrylafleur.com/blog/2016/01/21/node-js-and-rsync/

As I was moving on, I decided to refactor my code to handle publishing my web site to a remote host through my JavaScript deploy script. I decided to use rsync, so I found an rsync wrapper for Node.js that lets you call rsync from JavaScript. It’s called rsyncwrapper:

// Load the rsync module
var rsync = require("rsyncwrapper"); // Installed via $ npm install rsyncwrapper

This adds the rsync function to Node. The function takes several options, as does the rsync command, and they are passed to the rsync command. To help, I created a simple factory to generate the most used options that need to be passed to the rsync function:

var rsyncOpts =
 {
 // Options for rsync.
 src: ".",
 dest: rsyncuser+"@"+rsyncserver+":",
 ssh: true,
 port: rsyncport,
 };
// getOpts returns a copy of the rsync options with varying 
// source and dest.
rsyncOpts.getOpts = function(source, destPath, sshport) 
 {
 var newObj = 
 ({
 src: source,
 dest: this.dest + destPath,
 ssh: this.ssh,
 port: (!sshport?this.port:sshport)
 });
 return newObj;
 };

I then modified my CopyFolder command to use rsync.

At first, I was walking the source folder (as in my last article) and calling rsync file-by-file. This is nonsensical (and lazy on my part). rsync is way more efficient copying a folder than file-by-file. Not only that, but it was overloading the network connections and file copies were failing. I’m not going to show this, because it is messy and overly complex. The problem was that I was using a regular expression to test each file and rsync uses globbing patterns (*.ext), so I had to add a separate parameter for the globbing patterns.

Here is an outline of the updated copy folder function. This function is a work in progress:

var ds = path.sep; // Environment directory separator.

// Deploy the site either locally or to another server.
function CopyFolder(source, dest, rxFilter, useRsync, rsInclude)
{
    // Get the fully qualified source path
    var sSrcRoot = path.resolve(source);

    // Copy using rsync options to recursively copy a folder.
    var dRsyncOpts = rsyncOpts.getOpts(sSrcRoot + ds, dest + ds);
    dRsyncOpts.args = ['-a'];
    dRsyncOpts.include = rsInclude;
    dRsyncOpts.exclude = ["*~", '-f"- .git/"'];
    dRsyncOpts.recursive = true;

    rsync(dRsyncOpts, function(err, stderr, out, cmd) {
      if (err) return console.error('Error copying folder ' + source + ': ' + 
        err + " || " + stderr + " || " + cmd);
      logger.log('Sent folder ' + cmd);
    }); 
...

The rules above are a work in progress, and they are including more than I would like (because it is not explicitly excluded). I’ve worked with rsync, but I had not used the advanced includes and excludes above with the -a option. Anyway, using rsync can replace the use of the fs-extra copy command and should be much more efficient. I need to test it on Windows to make sure it still works there (with rsync installed). Working on Linux provides lots of standard command-line functionality that helps when working with local and remote files.

After doing this and testing it locally for deployment, I went to deploy to my site host. This is where I got a bunch of random errors and then everything errored out. Why? Because rsync was running in parallel and the hosting provider thought it was a DOS attack!

My next step is to use promises to serialize the requests. My current code does:

CopyFolder(src1, dest1, /\.php$/i, isRemote, ["*.php"]);
CopyFolder(src2, dest2, /\.php$/i, isRemote, ["*.php"]);
...

The regular expression is for backward compatibility and will eventually be refactored out. Instead, the calls should read as below to ensure the rsyncs are done serially:

CopyFolder(src1, dest1, isRemote, ["*.php"])
  .then(function () { return CopyFolder(src2, dest2, isRemote, ["*.php"]); })
  .then(function () { return CopyFolder(...); })
  .done(function () { console.log("Finished copying!"); });

It seems that using Node.js for deployments is not a common practice, but by building up simple tools and functions it becomes easy and allows for more functionality to be included in the future. Next will be promises and calling the script from Jenkins!

 

Node.js — for Scripting
Mon, 11 Jan 2016
https://henrylafleur.com/blog/2016/01/11/node-js-for-scripting/

I have to admit, JavaScript has been my favorite language since around 2000. It’s a functional language that lets you create objects by using the this keyword in your functions along with the new operator. You can use it dynamically or in a more structured way. You can pass functions as objects, and objects and arrays are nearly interchangeable.

In 2000, JavaScript was almost exclusively in the browser and you had lots of “if” statements to support multiple browsers. Fast forward to today, and JavaScript is everywhere, from the client to the server to the command prompt. Of course, this overlooks that JavaScript was already on the server and used for scripting through Microsoft Windows and its pluggable WSH and ASP, as well as with Netscape Server. It was far superior to VBScript on Windows and IIS and allowed for both front-end and back-end development in JS, a paradigm I followed until the release of .NET.

With the advent of Node.JS, we now have a solid end-to-end JavaScript environment for both client and server. Rather than talk about this stack, which has been covered in detail everywhere else, I want to talk about using Node as a scripting engine.

I just started using Node to script my release management and web service unit tests. Why have Node be only the front end and server? It can also replace Bash, PowerShell, or other scripting. On top of that, it is a cross-platform option that supports all major OSes. It has a ton of libraries and a package manager, npm, to keep track of them. One package I use is fs-extra. Below is the command for installing this package:

npm install --save fs-extra

For the purpose of handling release management, I am using the built-in Node package for the file system, fs, and also the fs-extra package that has several functions to help navigate the file system more easily. At the top of my script I use these two commands:

// Load the fs and path modules.
var fs = require('fs-extra'); // Installed via $ npm install --save fs-extra
var path = require('path');

I ended up finding a bug (or at least what I think is a bug) in the fs-extra package. (I need to contact the maintainer.) When you pass a filter to the copy command on a folder, it appears to filter the source path given and not each file that needs to be copied. I wrote my own version of this command. I need to send the maintainer a suggested patch to correct this, or perhaps get clarification:

function CopyFolder(source, dest, rxFilter)
{
    var sSrcRoot = path.resolve(source);

    // Since copying folder with filter didn't work,
    // walking path and copying.
    fs.walk(source)
      .on('data', function (item)
      {
          if (rxFilter.test(item.path))
          {
              var sRelativePath = item.path.substr(sSrcRoot.length);
              var sDestPath = dest + sRelativePath;
              fs.copy(item.path, sDestPath, { 'clobber': true,
                                              'preserveTimestamps': true },
                function (err) {
                  if (err) return console.error('Error copying ' + item.path + ': ' + err);
                  console.log('Copied file ' + item.path);
                });
          }
      });
}

I do want to refactor some of my code using promises, but I’ll get to that later. Also, I need not only to copy folders to the local folder, but also to rsync them to my remote site.

So I can do something like:

CopyFolder('~/Projects/WebSite', '~/public_html/', 
           /\..*html$|\.js$|\.css$|\.php$/i);

There are other tools to do this, but this gives us a nice scripting environment using the JavaScript language.

The full documentation of the built-in Node functions and objects can be found at https://nodejs.org/dist/latest-v4.x/docs/api/. Besides the expected file system functions, there are functions for interacting with the OS and the console, doing encryption, unit testing (assertions), and networking, and of course a ton of modules for dealing with web connections as an HTTP server.

As I continue to explore Node.js, I will write more about the features of the scripting engine from a console point-of-view.

 

External Control of Safety-Sensitive Equipment
Wed, 18 Nov 2015
https://henrylafleur.com/blog/2015/11/18/external-control-of-safety-sensitive-equipment/

Sometimes you want to allow safety-sensitive equipment to be controlled by an outside entity. Let’s first consider control of safety-sensitive equipment by an operator.

When an operator is controlling safety-sensitive equipment, you will need to check conditions to make sure that the operator doesn’t harm people or cause damage to the equipment. For mechanical equipment, this is often achieved through interlocks and limits. Interlocks often prevent two parts of a piece of equipment from hitting each other. For example, you must have the brake on to shift out of park in a car. This interlock prevents you from taking off immediately or accidentally shifting into reverse. Limits refer to how much control an operator has over a piece of equipment. Some vehicles have governors that prevent the speed from going over a certain limit. There may be other check conditions, but these are the primary ones.

When you want some external control to come from another system, it is practical to do so via an API. But, if this is safety-sensitive equipment, you need to make sure that the API acts like the operator of the equipment and can’t directly take control over the equipment. For example, if a self-driving car has an interface for setting the destination, it should only do so as a set point. It is up to the self-driving car to determine how to get to its destination.

This can be applied to other heavy equipment in an industrial setting. In factory automation, for example, a customer may want you to allow them to control some part of the process. In an automated baking factory, they may want to vary the oven temperature based on the humidity as part of a proprietary process. You may allow the oven to have an interface to vary the oven temperature, but only within a certain range. You may put in minimum and maximum temperature set points based on food safety standards and equipment limits. The caller may request a lower temperature, but you will simply go to the minimum safe temperature.
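
A minimal sketch of this idea in C# (the OvenController class and its limit values are hypothetical, not from any particular product): the external interface only ever submits a requested set point, and the equipment-side code clamps it to its own safe limits before acting on it.

using System;

// Hypothetical sketch: an external caller can only request a set point;
// the equipment-side controller clamps the request to its own safe limits.
public class OvenController
{
    // Safety limits owned by the equipment, never by the external caller.
    private const double MinSafeTempC = 120.0;
    private const double MaxSafeTempC = 250.0;

    public double CurrentSetPointC { get; private set; } = MinSafeTempC;

    // The only operation exposed to external systems.
    public double RequestSetPoint(double requestedTempC)
    {
        // Clamp the request into the safe operating range.
        double clamped = Math.Max(MinSafeTempC, Math.Min(MaxSafeTempC, requestedTempC));
        CurrentSetPointC = clamped;
        return clamped; // Report back what was actually applied.
    }
}

The same pattern generalizes: the external system expresses intent, and the equipment applies it only within limits that it controls.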

Let’s take a more extreme example of a pacemaker. Say that one company has custom logic to apply to the function of the pacemaker and wants to control it, but does not manufacture it. Or say this is for a medical study where you want to try different things with the pacemaker: voltage, heart-rate triggers, shock pattern, etc. There could be an interface to do this, or there could be a sub-processor, such as a Java Card or other embedded system, that could run custom logic through a loosely coupled interface to the pacemaker. The pacemaker would have specific limits: voltage range (min and max), heart-rate range for triggers, minimum or maximum number of shocks in a time period, etc. These would need to be applied to that loosely coupled interface to make sure that the patient is not injured. This would have to be examined in a great deal of detail, but these are some of the things that would need to be considered.

With that said, external control of safety-sensitive equipment is very risky and requires a great deal of testing to prevent nefarious activity or bugs that cause death, injury, or equipment damage. But this pattern is something that can be considered in product architectures where it makes business sense. The benefits of this complex and advanced functionality need to be weighed against the high cost and extreme risk of allowing such control. The important thing to remember is to limit the scope of the control significantly, to make risk control possible, but to do so in a way that allows the functionality to be enhanced by the addition of external control.

 

IBM Watson for Oil & Gas
Fri, 13 Nov 2015
https://henrylafleur.com/blog/2015/11/13/ibm-watson-for-oil-gas/

I attended a demo and workshop on IBM Watson showing off what it can do for the Oil and Gas industry. IBM has assembled a strong team of Oil and Gas professionals to really deep dive into that domain. From what I saw, they are ready to take on the Oil and Gas industry and add real value.

First, IBM Watson has the ability to integrate data from disparate data sources. There’s nothing new here as this has been done for many years. The key differentiator with Watson is that it is able to relate data from data sources with free text data so that it can look at the data in context. It can use this historical and contextual data to help make predictions of what will happen.

For example, if you have a set of morning reports that includes structured and unstructured data about depths, equipment, performance, and free-text reports, Watson will look at the morning report, read and interpret the natural language as the human brain does, and use that as context for the recorded events that happened during that time period. Say reaming was required in a certain type of well with certain characteristics; Watson could go back and find similar wells with similar characteristics where reaming was required.

Next, you can ask Watson questions. Once it has the data from morning reports and has ingested it into its cognitive engine, you can ask it natural language queries. For example, “What are similar wells that required reaming in the past with similar lithology?” Watson could give you a set of wells that are similar and can even rank the similarity as a percentage.

One of the great things about the demo was the quality of user experience. Knowing that this was a demo of current and future technology, it was quite impressive. The user experience was such that all relevant data was mapped out and related and navigation was customized by persona/role. The use of a radial bar chart to show well similarities was quite interesting and even showed how the chart could be used to show differences by using overlays of bars.

The visualizations of the data shown in this example were used to compare analogous wells based on key physical characteristics of the wells and the basins where they are located. The example also used unstructured data that could be displayed to show well history and relevant information for the similar wells. In this case, they were looking at how to drill a well in a new area of the world by comparing it to wells drilled in similar geological areas in other parts of the world.

The visualization was interactive. The user could tell Watson what to focus on and what to ignore and it would learn from this. This would allow the results to be further refined. Additionally, Watson remembers these choices for the next time the user tries to do a similar exercise. So if they were looking for analogous wells in other areas of the world with different geological properties, Watson would use the choices used in the last exercise.

The visualizations were not limited to the radial bar chart. A map was used and all well sites could be color coded based on specific characteristics. You would see the wells highlighted by color based on the characteristics chosen. The entire demo took 20 minutes and showed that the visualizations available are extremely powerful and allow for querying of data rapidly to get to where you can use Watson’s cognitive abilities to further refine the results.

The use of KPIs and heat maps showed great user experience for operations. This matches the work of PAS and their high performance HMI work. You want to only highlight information that is the most critical to operations, especially safety and environmental, so that these high-cost events can be properly mitigated. The KPI interfaces were simple and uncluttered and brought the user straight to the problem that needed to be resolved. Here, once a failure was noted, Watson was able to correlate information about causes from not just internal data sources but also news. For example, a pond being overfull after a severe thunderstorm.

There was another example around project planning and predicting cost overruns. Because Watson can take evidence from multiple sources of data, it can be used to predict time and cost overruns using not just structured data but also reports about related activity across the enterprise.

So as opposed to statistical techniques and using models, using Watson allows a process similar to human thought to connect data from across the enterprise and across all public data sources and put it together into a full picture. It learns as it goes and gains domain knowledge. This can reduce the amount of training required of employees and can even be used to drive best practices across an organization by having Watson learn these best practices where employees can query what to do next. It can be used in conjunction with dashboards to drive users to appropriate information and actions to help make better decisions. The practical implications are impressive and, if used properly, could see huge cost savings by driving better decisions. At economies of scale, it can lead to massive savings for a company. I predict that it will disrupt many of the existing technologies used in Oil and Gas today for information display, dashboards, and KPIs by giving the addition of advisory functionality and historical knowledge to organizations.

 
