How to Install Apps From Anywhere on Apple Mac

Previously, Macs would allow you to install software from anywhere. Now you will see the error message “NMAPxx.mpkg cannot be opened because it is from an unidentified developer”. If you want to fix this and enable apps to be installed from anywhere, you will need to run the following command:

sudo spctl --master-disable

Once you have run the command you should then see the “Anywhere” option under System Preferences > Security & Privacy!
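
If you want to check the current state, or put things back the way they were once you are done, the spctl flags below should do it (exact flag names can vary slightly between macOS versions, so treat this as a sketch):

# Check whether Gatekeeper assessments are currently enabled or disabled
spctl --status
# Re-enable the default Gatekeeper behaviour
sudo spctl --master-enable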

Definition: Bonuscide

bonuscide

noun

Definition of bonuscide:

Bonuscide is a term used to describe incentive schemes that progressively poison an organisation by ensuring that the flow of discretionary pay does not serve the organisation's goals. These schemes manifest in two main ways: the loss of key staff or a reduction in the client/customer base.

Bonuscide becomes more observable during a major crisis (for example COVID-19). Companies that practise it create self-harm by amplifying the fiscal impact of the crisis on a specific population of staff who are key to the company's success. For example, legacy organisations will tend to target skills that the board or exco don't understand, disproportionately targeting their technology teams whilst protecting their many layers of management.

The kinds of symptoms that will be visible are listed below:

  1. Rolling Downside Metrics: A metric will be used to reduce the discretionary pay pool, even though this metric was never previously used as an upside metric.
  2. Pivot Upside Metrics: If the financial measure chosen in 1) improves in the future, a new/alternative unfavourable financial measure will be substituted.
  3. Status Quo: Discretionary pay will always favour the preservation of the status quo. Incentives will never flow to those involved in execution or change, because these companies are governed by Pournelle’s Iron Law of Bureaucracy.
  4. Panic Pay: Companies that practise bonuscide are periodically forced to make poorly-thought-through emergency incentive payments to their residual staff. This creates a negative selection process (whereby they lock in the tail performers after losing their top talent).
  5. Trust Vacuum: Leaders involved in managing this pay process will feel compromised, as they know that the trusted relationship with their team will be indefinitely tainted.
  6. Business Case: The savings generated by the reduced discretionary compensation will be a small fraction of the additional costs and revenue impact that the saving in compensation will create. This phenomenon is well covered in my previous post on Constraint Theory.

Put simply, if a business case was created for this exercise, it wouldn’t see the light of day. The end result of bonuscide is the creation of a corporate trust/talent vacuum that leads to significant long-term harm and brand damage.

Part 2: Increasing your Cloud consumption (the sane way)

Introduction

This article follows on from the “Cloud Migrations Crusade” blog post…

A single-tenancy datacenter is a fixed-scale, fixed-price service on a closed network. The costs of the resources in the datacenter are divided up and shared out to the enterprise constituents on a semi-random basis. If anyone uses fewer resources than forecast, this generates waste which is shared back to the enterprise. If there is more demand than forecast, it will generate service degradation, panic or an outage! This model is clearly fragile and doesn’t respond quickly to change; it is also wasteful, as it requires a level of overprovisioning based on forecast consumption (otherwise you will experience delays to projects, service degradation or reduced resilience).

Cloud, on the other hand, is a multi-tenanted, on-demand software service which you pay for as you use it. But surely having multiple tenants running on the same fixed capacity actually increases the risks, and just because it’s in the cloud doesn’t mean you can get away without overprovisioning – so who sits with the overprovisioned costs? The cloud providers have to build this into their rates. So cloud providers have to deal with a balance sheet of fixed capacity shared amongst customers running on-demand infrastructure. They do this with very clever forecasting, very short provisioning cycles, and by asking their customers for forecasts and then offering discounts for pre-commits.

Anything that moves you back towards managing resource levels / forecasting will destroy a huge portion of the value of moving to the cloud in the first instance. For example, if you have ever been to a re:Invent you will be floored by the rate of innovation and also how easy it is to absorb these new innovative products. But wait – you just signed a 5yr cost commit and now you learn about Aurora’s new serverless database model. You realise that you can save millions of dollars; but you have to wait for your 5yr commits to expire before you adopt it, or maybe start mining bitcoin with all your excess commits! This is anti-innovation and anti-customer.

What’s even worse is that pre-commits are typically signed up front on day 1 – this is total madness!!! At the point where you know nothing about your brave new world, you use the old costs as a proxy to predict the new costs so that you can squeeze out a lousy 5% saving at the risk of 100% of the commit size! What you will start to learn is that your cloud success is NOT based on the commercial contract that you sign with your cloud provider; it’s actually based on the quality of the engineering talent that your organisation is able to attract. Cloud is an IP war – it’s not a legal/sourcing war. Allow yourself to learn, don’t box yourself in on day 1. When you sign the pre-commit you will notice your first-year utilisation projections are actually tiny and therefore the savings are small. So what’s the point of signing so early on, when the risk is at a maximum and the gains are at a minimum? When you sign this deal you are essentially turning the cloud into a “financial data center” – you have destroyed the cloud before you even started!

A Lesson from the Field – Solving a Hadoop Compute Demand Spike:

We moved 7,000 cores of burst compute to AWS to solve a capacity issue on premise. That’s expensive, so let’s “fix the costs”! We could go and sign an RI (reserved instance), play with spot, buy savings plans or even beg/barter for some EDP relief. But instead we plugged the service usage into QuickSight and analysed the queries. We found one query was using 60 percent of the entire bank’s compute! Nobody confessed to owning the query, so we just disabled it (if you need a reason for your change management, describe the change as “disabling a financial DDoS”). We quickly found the service owner and explained that running a table scan across billions of rows to return a report with just last month’s data is not a good idea. We also explained that if they didn’t fix this we would start billing them in 6 weeks’ time (a few million dollars). The team deployed a fix and now we run the bank’s big data stack at half the cost – just by tuning one query!!!

So the point of the above is that there is no substitute for engineering excellence. You have to understand and engineer the cloud to win; you cannot contract yourself into the cloud. The more contracts you sign, the more failures you will experience. This leads me to point 2…

Step 2: Training, Training, Training

Start the biggest training campaign you possibly can – make this your crusade. Train everyone: business, finance, security, infrastructure – you name it, you train it. Don’t limit what anyone can train on; training is cheap – feast as much as you can. Look at Udemy, ACloudGuru, YouTube, WhizLabs etc. If you get this wrong then you will find your organisation fills up with expensive consultants and bespoke migration products that you don’t need and can easily do yourself, via open source or with your cloud provider’s toolsets. In fact I would go one step further – if you’re not prepared to learn about the cloud, you’re not ready to go there.

Step 3: The OS Build

When you do start your cloud migration and begin to review your base OS images, go right back to the very beginning and remove every single product from all of these base builds. Look at what you can get out of the box from your cloud provider and really push yourself hard on what you really need vs what is merely nice to have. The trick is that to get the real benefit from a cloud migration, you have to start by making your builds as “naked” as possible. Nothing should move into the base build without a good reason. Ownership and reporting lines are not a good enough reason for someone’s special “tool” to make it into the build. This process, if done correctly, should deliver between 20% and 40% of your cloud migration savings. Do this badly and your costs, complexity and support will all head in the wrong direction.

Security HAS to be a first-class citizen of your new world. In most organisations this will likely make for some awkward cultural collisions (control and ownership vs agility) and some difficult dialogues. The cloud, by definition, should be liberating – so how do you secure it without creating a “cloud bunker” that nobody can actually use? More on this later… 🙂

Step 4: Hybrid Networking

For any organisation with data centers – make no mistake, if you get this wrong it’s over before it starts.

Iterating through the contents of the ROT (running object table) using C#

Back in the day I had to do quite a bit of COM / .NET interop. For some reason (and I cannot remember what it was), I needed to be able to enumerate the COM running object table (ROT). I think it was to do with CreateObject/GetObject and trying to synthesise a singleton for an exe.

The System.Runtime.InteropServices.Marshal.GetActiveObject method will return an existing instance of an application registered in the Running Object Table (ROT) by matching the prog id (or moniker) to an entry in this table.

Note, Office applications do not register themselves if another instance is already in the ROT because the moniker for itself is always the same, and cannot be distinguished. This means that you cannot attach to any instance except for the first instance. However, because Office applications also register their documents in the ROT, you can successfully attach to other instances by iterating the ROT looking for a specific document, and attaching to this document to get the Application object from this document.

The source code below contains a ROTHelper class which, among other things, demonstrates how to iterate through the ROT. Note, once you have the desired application object, you will either need to use reflection or cast to the relevant COM interface to call any methods.

using System;
using System.Net.Sockets;
using System.Runtime.InteropServices;
using System.Collections;

#region TestROT Class

/// <summary>
/// Test ROT class showing how to use the ROTHelper.
/// </summary>
class TestROT
{
	/// <summary>
	/// The main entry point for the application.
	/// </summary>
	[STAThread]
	static void Main(string[] args)
	{
		// Iterate through all the objects in the ROT
		Hashtable runningObjects = ROTHelper.GetActiveObjectList(null);
		// Display the object ids
		foreach( DictionaryEntry de in runningObjects ) 
		{
			string progId = de.Key.ToString();
			if ( progId.IndexOf("{") != -1 ) 
			{
				// Convert a class id into a friendly prog Id
				progId = ROTHelper.ConvertClassIdToProgId( de.Key.ToString() ) ;
			}
			Console.WriteLine( progId );
			object getObj = ROTHelper.GetActiveObject(progId);
			if ( getObj != null ) 
			{
				Console.WriteLine( "Fetched: " + progId );
			}
			else
			{
				Console.WriteLine( "!!!!!FAILED TO fetch: " + progId );
			}
		}
		Console.ReadLine();
	}
}
#endregion TestROT Class

#region ROTHelper Class

/// <summary>
/// The COM running object table utility class.
/// </summary>
public class ROTHelper
{
	#region APIs

	[DllImport("ole32.dll")]  
	private static extern int GetRunningObjectTable(int reserved, 
		out UCOMIRunningObjectTable prot); 

	[DllImport("ole32.dll")]  
	private static extern int  CreateBindCtx(int reserved, 
		out UCOMIBindCtx ppbc);

	[DllImport("ole32.dll", PreserveSig=false)]
	private static extern void CLSIDFromProgIDEx([MarshalAs(UnmanagedType.LPWStr)] string progId, out Guid clsid);

	[DllImport("ole32.dll", PreserveSig=false)]
	private static extern void CLSIDFromProgID([MarshalAs(UnmanagedType.LPWStr)] string progId, out Guid clsid);

	[DllImport("ole32.dll")]
	private static extern int ProgIDFromCLSID([In()]ref Guid clsid, [MarshalAs(UnmanagedType.LPWStr)]out string lplpszProgID);

	#endregion

	#region Public Methods
	
	/// <summary>
	/// Converts a COM prog id into a class id.
	/// </summary>
	/// <param name="progID">The prog id to convert to a class id.</param>
	/// <returns>Returns the matching class id or the prog id if it wasn't found.</returns>
	public static string ConvertProgIdToClassId( string progID )
	{
		Guid testGuid;
		try
		{
			CLSIDFromProgIDEx(progID, out testGuid);
		}
		catch
		{
			try
			{
				CLSIDFromProgID(progID, out testGuid);
			}
			catch
			{
				return progID;
			}
		}
		return testGuid.ToString().ToUpper();
	}

	/// <summary>
	/// Converts a COM class ID into a prog id.
	/// </summary>
	/// <param name="classID">The class id to convert to a prog id.</param>
	/// <returns>Returns the matching prog id or null if it wasn't found.</returns>
	public static string ConvertClassIdToProgId( string classID )
	{
		Guid testGuid = new Guid(classID.Replace("!",""));
		string progId = null;
		try
		{
			ProgIDFromCLSID(ref testGuid, out progId);
		}
		catch (Exception)
		{
			return null;
		}
		return progId;
	}

	/// <summary>
	/// Get a snapshot of the running object table (ROT).
	/// </summary>
	/// <param name="filter">An optional filter to apply to the list (may be null).</param>
	/// <returns>A hashtable mapping the display name of each matching ROT entry to the corresponding object.</returns>
	public static Hashtable GetActiveObjectList(string filter)
	{
		Hashtable result = new Hashtable();

		int numFetched;
		UCOMIRunningObjectTable runningObjectTable;   
		UCOMIEnumMoniker monikerEnumerator;
		UCOMIMoniker[] monikers = new UCOMIMoniker[1];

		GetRunningObjectTable(0, out runningObjectTable);    
		runningObjectTable.EnumRunning(out monikerEnumerator);
		monikerEnumerator.Reset();          

		while (monikerEnumerator.Next(1, monikers, out numFetched) == 0)
		{     
			UCOMIBindCtx ctx;
			CreateBindCtx(0, out ctx);     
        
			string runningObjectName;
			monikers[0].GetDisplayName(ctx, null, out runningObjectName);

			object runningObjectVal;  
			runningObjectTable.GetObject( monikers[0], out runningObjectVal); 
			if ( filter == null || filter.Length == 0 || runningObjectName.IndexOf( filter ) != -1 ) 
			{
				result[ runningObjectName ] = runningObjectVal;
			}
		} 

		return result;
	}

	/// <summary>
	/// Returns an object from the ROT, given a prog Id.
	/// </summary>
	/// <param name="progId">The prog id of the object to return.</param>
	/// <returns>The requested object, or null if the object is not found.</returns>
	public static object GetActiveObject(string progId)
	{
		// Convert the prog id into a class id
		string classId = ConvertProgIdToClassId( progId );

		UCOMIRunningObjectTable prot = null;
		UCOMIEnumMoniker pMonkEnum = null;
		try
		{
    		int Fetched = 0;
			// Open the running objects table.
			GetRunningObjectTable(0, out prot);
			prot.EnumRunning(out pMonkEnum);
			pMonkEnum.Reset();
			UCOMIMoniker[] pmon = new UCOMIMoniker[1];

			// Iterate through the results
			while (pMonkEnum.Next(1, pmon, out Fetched) == 0)
			{
				UCOMIBindCtx pCtx;

				CreateBindCtx(0, out pCtx);

				string displayName;
				pmon[0].GetDisplayName(pCtx, null, out displayName);
				Marshal.ReleaseComObject(pCtx);
				if ( displayName.IndexOf( classId ) != -1 )
				{
					// Return the matching object
					object objReturnObject;
					prot.GetObject(pmon[0], out objReturnObject);
					return objReturnObject;
				}
			}
			return null;
		}
		finally
		{
			// Free resources
			if (prot != null )
				Marshal.ReleaseComObject( prot);
			if (pMonkEnum != null )
				Marshal.ReleaseComObject ( pMonkEnum );
		}
	}

	#endregion
}

#endregion

Windows: Delete the Icon Cache

I remember getting weird flashing on my laptop and eventually figured out that my icon cache was full. So if you ever get this, try running the script below. This is obviously quite a weird/random post – hope it’s helpful 🙂

cd /d %userprofile%\AppData\Local\Microsoft\Windows\Explorer 
attrib -h iconcache_*.db 
del iconcache_*.db 
start explorer
pause

The Least Privileged Lie

In technology, there is a tendency to solve a problem badly by using gross simplification, then come up with a catchy one-liner and broadcast this as doctrine or a principle. Nothing ticks more boxes in this regard than the principle of least privileges. The ensuing enterprise-scale deadlocks created by a crippling implementation of least privileges are almost certainly lost on its evangelists. This blog will try to put an end to the slavish efforts of many security teams that are trying to ration out micro permissions in the hope that the digital revolution can fit into some break-glass approval process.

What is this “Least Privileged” thing? Why does it exist? What are the alternatives? Wikipedia gives you a good overview of this here. The first line contains an obvious and glaring issue: “The principle means giving a user account or process only those privileges which are essential to perform its intended function”. Here the principle is being applied equally to users and processes/code, and it states that only essential privileges should be given. What this principle is trying to say is that we should treat human beings and code as the same thing, and that we should only give humans “essential” permissions. Firstly, who on earth decides what the bar for essential is, and how do they ascertain what is and what is not essential? Do you really need to use storage? Do you really need an API? If I give you an API, do you need Puts and Gets?

Human beings are NOT deterministic. If I have a team of humans that can operate under the principle of least privileges then I don’t need them in the first place; I can simply replace them with some AI/RPA. Imagine the brutal pain of a break-glass activity every time someone needed to do something “unexpected”. “Hi boss, I need to use the bathroom on the 1st floor – can you approve this? <Gulp> Boss you took too long… I no longer need your approval!”. Applying least privileges to code would seem to make some sense; BUT only if you never updated the code, and if you did update the code you would need to make sure you have 100% test coverage.

So why did some bright spark want to duct-tape the world to such a brittle, pain-yielding principle? At the heart of this are three issues: Identity, Immutability and Trust. If there are other ways to solve these issues then we don’t need the pain and risks of trying to implement something that will never actually work, creates friction and, critically, creates a false sense of security. Least privileges will never save anyone; you will just be told that if you could have performed this security miracle then you would have been fine. But you cannot, and so you are not.

What’s interesting to me is how widely the least privileged lie is ignored. For example, just think about how we implement user access. If we truly believed in least privileges then every user would have a unique set of privileges assigned to them. Instead, because we acknowledge this is burdensome, we approximate the privileges that a user will need using policies which we attach to groups. The moment we add a user to one of these groups, we are approximating their required privileges and starting to become overly permissive.
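
As a concrete illustration of that approximation, this is roughly what the group-based model looks like in AWS IAM (the group, user and managed policy below are just examples, not a recommendation) – a single attach applies a broad, pre-baked policy to every member of the group:

# Create a group and attach a broad AWS-managed policy to it (example names only)
aws iam create-group --group-name developers
aws iam attach-group-policy --group-name developers --policy-arn arn:aws:iam::aws:policy/PowerUserAccess
# Every user added to the group now inherits the full policy, whether or not they need all of it
aws iam add-user-to-group --group-name developers --user-name jane.doe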

Let’s be clear with each other: anyone trying to implement least privileges is living a lie. The extent of the lie normally only becomes clear after the event. So this blog post is designed to re-point energy towards sustainable alternatives that work, and additionally to remove the need for the myriad of micro-permissive handbrakes (that routinely get switched off to debug outages and issues).

Who are you?

This is the biggest issue and still remains the largest risk in technology today. If I don’t know who you are then I really, really want to limit what you can do. Experiencing a root/super user account takeover is a doomsday scenario for any organisation. So let’s limit the blast zone of these accounts, right?

This applies equally to code and humans. For code this problem has been solved a long time ago, and if you look

Is this really my code?

AWS: Making use of S3’s ETags to check if a file has been altered

I was playing with S3 the other day and I noticed that a file which I had uploaded twice, in two different locations, had an identical ETag. This immediately made me think that this tag was some kind of hash. So I had a quick look at the AWS documentation, and this ETag turns out to be marginally useful. ETag is an “Entity Tag” and it’s basically an MD5 hash of the file (although once a file is uploaded in multiple parts – for example anything bigger than 5GB – it is no longer a simple MD5).

So if you ever want to compare a local copy of a file with an AWS S3 copy of a file, you just need MD5 installed locally (the steps below are for Ubuntu Linux):

# Update your Ubuntu
# Download the latest package lists
sudo apt update
# Perform the upgrade
sudo apt-get upgrade -y
# Now install common utils (inc MD5)
sudo apt install -y ucommon-utils
# Upgrades involving the Linux kernel, changing dependencies, adding / removing new packages etc
sudo apt-get dist-upgrade

Next, to view the MD5 hash of a file, simply type:

# View the MD5 hash of the file
md5sum myfilename.myextension
2aa318899bdf388488656c46127bd814  myfilename.myextension
# The first number above will match your S3 ETag if the file has not been altered

Below is the screenshot of the properties that you will see in S3 with a matching MD5 hash:
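
If you would rather do the comparison from the command line, something like the below works (the bucket and key names are placeholders); note the ETag comes back wrapped in double quotes, and for multipart uploads it will not be a plain MD5:

# Fetch the ETag for an S3 object
aws s3api head-object --bucket mybucket --key myfilename.myextension --query ETag --output text
# Compare it with the local MD5
md5sum myfilename.myextension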

Using TPC-H tools to Create Test Data for AWS Redshift and AWS EMR

If you need to test out your big data tools, below is a useful set of scripts that I have used in the past for AWS EMR and Redshift:

# Install git
sudo yum install make git -y
# Install the tpch-kit
git clone https://github.com/gregrahn/tpch-kit
cd tpch-kit/dbgen
sudo yum install gcc -y
# Compile the tpch kit
make OS=LINUX
# Go home
cd ~
# Now make your emr data
mkdir emrdata
# Tell tpch to use this dir
export DSS_PATH=$HOME/emrdata
cd tpch-kit/dbgen
# Now run dbgen in verbose mode, with tables (orders), 10gb data size
./dbgen -v -T o -s 10
# Move the data to an s3 bucket
cd $HOME/emrdata
aws s3api create-bucket --bucket andrewbakerbigdata --region af-south-1 --create-bucket-configuration LocationConstraint=af-south-1
aws s3 cp $HOME/emrdata s3://andrewbakerbigdata/emrdata --recursive
cd $HOME
mkdir redshiftdata
# Tell tpch to use this dir
export DSS_PATH=$HOME/redshiftdata
# Now make your redshift data
cd tpch-kit/dbgen
# Now run dbgen in verbose mode, with tables (orders), 40gb data size
./dbgen -v -T o -s 40
# These are big files, so lets find out how big they are and split them
# Count lines
cd $HOME/redshiftdata
wc -l orders.tbl
# Now split orders into 15m lines per file
split -d -l 15000000 -a 4 orders.tbl orders.tbl.
# Now split line items
wc -l lineitem.tbl
split -d -l 60000000 -a 4 lineitem.tbl lineitem.tbl.
# Now clean up the master files
rm orders.tbl
rm lineitem.tbl
# Move the split data to an s3 bucket
aws s3 cp $HOME/redshiftdata s3://andrewbakerbigdata/redshiftdata --recursive
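
To actually load the split files into Redshift, a COPY from the bucket is the usual route. The sketch below is illustrative only: it assumes you have already created the TPC-H tables (the tpch-kit ships a dss.ddl for this), that a suitable IAM role is attached to the cluster, and that the host, database and role ARN shown are placeholders. Depending on your DDL you may also need to strip the trailing | that dbgen writes at the end of each row.

# Load the split order files into Redshift via psql (host, database and role are placeholders)
psql -h mycluster.abc123.af-south-1.redshift.amazonaws.com -p 5439 -U awsuser -d dev \
  -c "COPY orders FROM 's3://andrewbakerbigdata/redshiftdata/orders.tbl.' IAM_ROLE 'arn:aws:iam::123456789012:role/myRedshiftRole' DELIMITER '|';"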

Setting up SSH for ec2-user on your WordPress sites

So after getting frustrated (and even recreating my EC2 instances) due to a “Permission denied (publickey)” error, I finally realised that the WordPress builds are by default set up for SSH using the bitnami account (or at least my build was).

This means each time I login using ec2-user I get:

sudo ssh -i CPT_Default_Key.pem ec2-user@ec2-13-244-140-33.af-south-1.compute.amazonaws.com
ec2-user@ec2-13-244-140-33.af-south-1.compute.amazonaws.com: Permission denied (publickey).

Being a limited human being, I will never cope with two user names. Luckily, moving over to a standard login name (ec2-user) is relatively simple. Just follow the steps below (after logging in using the bitnami account):

sudo useradd -s /bin/bash -o -u $(id -u) -g $(id -g) ec2-user

sudo mkdir ~ec2-user/
sudo cp -rp ~bitnami/.ssh ~ec2-user/
sudo cp -rp ~bitnami/.bashrc ~ec2-user/
sudo cp -rp ~bitnami/.profile ~ec2-user/

Next you need to copy your public key into the authorised keys file using:

cat mypublickey.pub >> /home/ec2-user/.ssh/authorized_keys

Next, to allow the ec2-user to execute commands as the root user, add the new user account to the bitnami-admins group by executing the following command when logged in as the bitnami user:

sudo usermod -aG bitnami-admins ec2-user
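
To sanity check the change, log out and confirm the new account works end to end (the key and hostname below are the ones from the example above):

# Log in with the standard user name and confirm sudo now works
ssh -i CPT_Default_Key.pem ec2-user@ec2-13-244-140-33.af-south-1.compute.amazonaws.com
sudo whoami
# should print: root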

Linux: Quick guide to the CD command – for windows dudes :)

Ok, so I am a Windows dude and only after Docker and K8s came along did I start to get all the hype around Linux. To be fair, Linux is special and I have been blown away by the engineering effort behind this OS (and also glad to leave my Daniel Appleman Win32 API book on the shelf for a few years!).

What surprises me with Linux is the number of shortcuts, so before I forget them I am going to document a few of my favorites (the context here is that I use WSL2 a lot and these are my favorite navigation commands).

Exchanging files between Linux and Windows:

This is a bit of a pain, so I just create a symbolic link to the Windows root directory in my Linux home directory so that I can easily copy files back and forth.

cd ~
ln -s /mnt/c/ mywindowsroot
cd mywindowsroot
ls
# go back home and copy everything from the windows root folder into my wsl linux home directory
cd ~
cp -r mywindowsroot/. .

Show Previous Directory

echo $OLDPWD

Switch back to your previous directory

cd -

Move to Home Directory

cd ~
or just use
cd

Pushing and Popping Directories

pushd and popd are shell builtins in bash and certain other shells. pushd saves the current working directory onto a stack and changes to a new directory, and popd pops the most recent directory off the stack and changes back to it. This is very handy when you’re jumping around but don’t want to create symbolic links.

# Push the current directory onto the stack (you can also enter an absolute directory here, like pushd /var/www)
pushd .
# Go to the home dir
cd
ls
# Now move back to this directory
popd
ls
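
The directory stack can hold more than one entry: dirs -v shows what is on it, and pushd with no arguments swaps the top two entries (handy for toggling between two working directories):

# Build up a small stack of directories
pushd /var/log
pushd /etc
# Show the stack with index numbers
dirs -v
# Swap the top two directories on the stack
pushd
# Pop back until the stack is unwound
popd
popd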