Microsoft Remote Desktop for Mac

2018/03/12

Mac users have a few choices when it comes to an RDP client for macOS. There is the one that comes included with macOS, or there is the one from Microsoft in the App Store.

There is also a beta version of the App Store app available from here which, if you like to run beta software to get early access to new and improved features (and bugs), is also useful.

I blew away my Mac on the weekend and did a fresh OS install etc., because reasons. Unfortunately for me, I didn’t remember to save/export my config from the beta client. Fortunately I had my Time Machine backups and was able to grab the config file out of one and copy it over to my new profile. The problem is that the App Store and beta versions seem to store their configs in different files. **FURIOUS EYE ROLLING**

For the benefit of other people, and for my own future reference: the beta version of the app stores its config in `~/Library/Application Support/com.microsoft.rdc.osx.beta`. So, to recover your settings quickly and easily, quit the app, copy the folder above from your backups, and then restart your Mac. After the restart, when you open the beta client you should see all your configs restored.
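For the terminal-inclined, the restore amounts to something like the following (a rough sketch: the Time Machine mount point, machine name, and app name below are examples, so adjust them to match your own backup):

```
# Quit the beta client first so it doesn't overwrite the restored config on exit.
osascript -e 'quit app "Microsoft Remote Desktop Beta"'

# Copy the config folder out of the Time Machine backup into the new profile.
# BACKUP is an example path; browse your own backup volume to find the real one.
BACKUP="/Volumes/Time Machine Backups/Backups.backupdb/MyMac/Latest/Macintosh HD"
cp -R "$BACKUP/Users/$USER/Library/Application Support/com.microsoft.rdc.osx.beta" \
      "$HOME/Library/Application Support/"
```

Then restart the Mac as above.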

Edit:

I had a response to my question about this from the team that develops the tool. They said:

Hi Andrew, they are stored under:
`~/Library/Application Support/com.microsoft.rdc.osx.beta/com.microsoft.rdc.application-data.sqlite`

This will only transfer your saved desktops, remote app feeds, gateway, and usernames.
It won’t transfer your passwords, as they are stored in the keychain.
Also, please note that this is not an officially supported scenario.


Azure AD Connector Configuration Dumper

2018/03/06

So today I discovered that if you inspect the Azure AD Connector config via its GUI, what it shows you is only about 5% of what is actually there. Specifically, the GUI doesn’t display the rules for OU filtering.

To work around this you can use the sync tool to display the OU filtering config, though you’ll need to log in as your on-prem AD sync user to do so. If you don’t have those credentials, you can gather the config using the tool below and then turn it into an easier-to-review HTML output.

Be warned though: a small AD I ran this against produced a 3MB HTML file of stuff. There are A LOT of items in AADC that average admins won’t ever see or hear about.

Microsoft/AADConnectConfigDocumenter: AAD Connect configuration documenter is a tool to generate documentation of an AAD Connect installation.
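For reference, the workflow looks roughly like this (a sketch only: the export path and `Contoso` folder names are placeholders, the script name is as the repo shipped it at the time of writing, and you should follow the repo’s README for the exact invocation):

```
# On the AAD Connect server: export the sync configuration.
# Get-ADSyncServerConfiguration comes with the ADSync PowerShell module
# that AAD Connect installs.
Import-Module ADSync
Get-ADSyncServerConfiguration -Path "C:\Temp\AADCExport"

# Copy the export under the documenter's Data folder, then generate the report.
# The script is built to diff a "pilot" config against a "production" one; to
# document a single environment, pointing both arguments at the same export
# should work, but check the README to be sure.
.\AzureADConnectSyncDocumenter.cmd "Contoso\AADCExport" "Contoso\AADCExport"
```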


IPv6 with Synology RT2600ac via HE tunnel

2017/12/31

Very quick and dirty steps to get IPv6 over IPv4 working on your Synology RT2600ac.

Head over to https://tunnelbroker.net and create an account. Create a tunnel service and note down the following details:

IPv6 Tunnel Endpoints

  • Server IPv4 Address
  • Server IPv6 Address
  • Client IPv4 Address
  • Client IPv6 Address
  • Routed IPv6 Prefix

Open the SRM admin page and open the “Network Centre” app.

Select “Internet”. Under the “Connection” tab click the “IPv6 setup” button. Select the following settings:

  • IPv6 Setup: 6in4
  • IPv6 Address: {CLIENT IPv6 ADDRESS AS ABOVE}
  • Prefix Length: 64
  • Prefix: {ROUTED IPv6 PREFIX AS ABOVE} You don’t need to include the trailing /64; it’s already entered for you.
  • Remote server IPv4 address: {SERVER IPv4 ADDRESS AS ABOVE}

Press “Okay”.

Back in the “Network Centre” app, select “Local Network”. Under the “IPv6” tab, tick the box to enable IPv6 and select your “Prefix” from the drop down menu. Press Apply.

Now onto validating and testing.

Back in the “Network Centre” app, click on “Status”. Next to the “Internet Connection” heading, click the little drop down and select IPv6. It should say “Connected”.

Finally, check a client machine in your local network and make sure it has an IPv6 address auto-assigned. From that machine browse to http://ipv6-test.com/
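From a terminal on the client you can check the same things (illustrative commands; Linux equivalents are noted in the comments):

```
# Look for a global IPv6 address (HE tunnel addresses sit under 2001:470::/32)
ifconfig | grep inet6            # Linux: ip -6 addr show

# Verify end-to-end IPv6 reachability
ping6 ipv6.google.com            # Linux: ping -6 ipv6.google.com
curl -6 http://ipv6-test.com/    # force the connection over IPv6
```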



Useful man pages in your browser

2017/12/21

A new useful *nix tool popped up in my Twitter timeline a while back.

http://tldr.sh/

The lede says “Simplified and community-driven man pages”, and it does what it says on the tin.

If you’re a *nix admin you know the drill of looking up man pages for *nix tools: you’re solving some problem and need to grok a command’s options and/or refer to a sample of how the tool is used. `man {toolname}` is the way to do it.

Frequently, though, the result is pages and pages of esoteric information about the tool, most of which you will never use and which takes a long time to wrap your brain around. Sometimes there will be examples, waaay at the bottom of the page, and often those usage examples are pretty light on information.

This is where http://tldr.sh/ comes in. Put your *nix tool name into the search box at https://tldr.ostera.io/ and it will display useful help. Example: https://tldr.ostera.io/tar

But it doesn’t end there. The project also has many community-contributed clients. Scroll down the page at http://tldr.sh/ for the full list.
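For example, the Node.js client (one of many; pick whichever suits you from that list):

```
npm install -g tldr   # install the Node.js tldr client
tldr tar              # community-written usage examples for tar
```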

My favourite use of it is to add https://tldr.ostera.io/ as a search provider in your browser (Chrome, in my case). Open your Chrome settings and add a search engine with the config as below.

[Screenshot: Chrome’s “Add search engine” settings dialog]
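If the screenshot doesn’t survive, the important part is the query URL, which passes your search term as the path (`%s` is Chrome’s placeholder for the query; the name and keyword fields are just my choices):

```
Search engine: tldr
Keyword:       tldr
URL:           https://tldr.ostera.io/%s
```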

Now, in any search bar in Chrome you can type `tldr {toolname}`, e.g. `tldr tar`, and it will display the results right there for you. A convenient way to get useful information about *nix tools.


ASD Essential Eight and Enterprises

2017/03/16

The Australian Signals Directorate (ASD) has updated its top four mitigations for intrusions. The recommendations have now expanded to the ‘Essential Eight’.

The Essential Eight is a short list of the top eight mitigations government and businesses can adopt to reduce the threat and likelihood of a successful IT security and/or information breach.

This article examines the eight recommendations and which ones I consider a *MUST* for private enterprise. Government agencies should use their own info-sec policies to determine their approach; that’s a topic for a different post.

Of the eight mitigations described by the ASD, the following four are critical for business:

  • Patch Operating Systems and Applications
  • Restrict, control, monitor and audit privileged account access and use
  • Use 2FA/MFA authentication
  • Backup data and offline storage

Lastly, the other three items are briefly described with justifications for why they aren’t a *MUST* do.

Patch Operating Systems and Applications

It goes without saying that regularly applying operating system and application patches to any system is a must. This is a well-known mitigation and has been recommended practice for many years now. Operating system vendors now have mature and robust tools that allow businesses to quickly understand the patch level of existing systems and easily approve patches for distribution and installation.

Applications are a different problem. The recent Apache Struts vulnerability and the OpenSSL Heartbleed exploit in 2014 are good examples of complex application vulnerabilities. In both cases it wasn’t the vendor’s product or service that was directly vulnerable, but the underlying libraries and code included within it.

This type of problem is difficult to manage since, as a consumer of a SaaS offering or COTS product, it is very difficult to know precisely what other code vendors include in their offerings.

Couple these complexities with busy teams and a reticence to touch or change old or fragile systems, and it’s no surprise that application vulnerabilities take a long time to be patched.

Robust and regular operating system and application patching is highly desirable but may not be easily attainable. There are, however, other options to further mitigate this risk.

Automated vulnerability scanning can be used to regularly scan and interrogate systems and their software inventory for known exploitable code. These systems can present threat telemetry (or alarms) to operations teams. This allows informed decision making so that limited and valuable resources can target the areas of highest risk.

Restrict, control, monitor and audit privileged account access and use

Exploitation of privileged accounts is a common and easy vector for intrusion into any system. To make matters worse many systems have an ‘all or nothing’ configuration of privileged accounts. The most common example is the ‘Domain Administrators’ group in Microsoft Active Directory where members of the group are able to perform any action on any object or device in the domain with little-to-no audit or oversight.

The people who feed and water the systems should only elevate their access to a privileged state when they are required to do so and only as part of a controlled and audited event. A few simple rules and approaches can help realise this.

Operations staff are users too, therefore their accounts are just normal user accounts and confer no special access or privileges at any time. This approach serves as an important reminder to operations staff that a conscious choice and action on the part of the operator is required to elevate their access. It pushes back on the operator’s inclination to think ‘oh, I’ll just quickly tweak this small thing’ and possibly leave something in an open state, or worse still, cause a broad, unintended side effect. Remember: with great power comes great responsibility.

Privileged groups and roles are always empty. If the system has a role-based access control mechanism and the roles and groups with elevated privileges are empty then there are no individual user accounts to leverage to gain elevated access.

Privileged group and role memberships are controlled by policy. The IDAM system shouldn’t govern which users are members of which roles. A policy control system (in AD, for example, that’s Group Policy) should be used to define which accounts are members of which privileged groups. Doing this ensures that even if an attacker is able to enact a change in the IDAM system, there is another layer of control and audit which can quickly revert the change to the desired state.

Changes to privileged groups and roles are audited, monitored and alarmed. This is a critical capability to realise. The operations teams (both infra and sec) should maintain situational awareness of privileged actions and changes. This means that everyone knows when a privileged action is scheduled to occur, so that when one occurs unexpectedly, alarms can be raised and investigations can commence.
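As a trivial sketch of the idea (PowerShell, assuming the RSAT ActiveDirectory module; the addresses are placeholders, and a real deployment would feed the domain controllers’ security events, e.g. 4728/4732, into a SIEM rather than polling):

```
Import-Module ActiveDirectory

# The privileged group should normally be empty; alarm if it isn't.
$members = Get-ADGroupMember -Identity "Domain Admins"
if ($members) {
    Send-MailMessage -To "secops@example.com" -From "audit@example.com" `
        -Subject "ALERT: Domain Admins has standing members" `
        -Body (($members | Select-Object -ExpandProperty SamAccountName) -join "`n") `
        -SmtpServer "smtp.example.com"
}
```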

Businesses insist on a ‘separation of powers’ approach to procurement and purchasing; privileged access to systems should be no different.

Use 2FA/MFA authentication

Password sharing between systems, and storage of passwords in weak or reversible forms, is the single biggest threat to the safety of credential information in any system. For systems accessible from the public Internet, the username/password pair is no longer viable as a secure method of access control. Better security of user authentication is required.

Enter two-factor authentication (2FA) or multi-factor authentication (MFA). This capability adds another factor to the user authentication process; better yet, it is something that is not easily shared and is only valid for a short period of time. Attackers can attempt to brute-force their way into an account forever, but without the token they will never succeed. Better still, attempted logons without the 2FA/MFA token can be used to trigger alarms and prompt further investigation by ops and sec-ops teams.

Attach the 2FA/MFA to a privileged account request or action and a strong protection against compromise of elevated privileges is realised.

Backup data and offline storage

This mitigation ensures that backup data is captured and stored in a manner that makes it impervious to ransomware and other data-loss intrusions. Data backup is a core service component of any IT system. However, in recent years, as incidents of ransomware have grown, stories have emerged of ransomware intrusions that also managed to reach the data backup systems. This has meant that businesses have not been able to restore their data because the backups were also encrypted. In some of these cases businesses have paid large ransoms or gone bust, and payment of the ransom is no guarantee that the attacker will release the decryption keys to you.

Storing backup data offline is considered an ageing practice now; however, it is an effective mitigation for ransomware and other malicious data-loss scenarios because it ensures that backup data is not exposed to this type of intrusion.

In cases where offline storage is undesirable or difficult, implement strong control and audit of backup account and system access. Treat backup access as a privileged action and ensure that backup data cannot be accessed without using 2FA or MFA. Consider a three-tiered backup storage approach where recent backup data is stored online or near-online and older, colder backup data is stored offline.

Backup data is the last line of defence against ransomware; ensure your organisation and teams are well geared and equipped to rely on it when things go badly wrong.

Whitelisting, macros and application hardening

The Essential Eight recommends three other mitigation strategies which are worth mentioning, even though I’m not strictly recommending them for businesses here.

These three mitigations have important value in the effort to prevent and limit infosec incidents in any organisation and must not be dismissed out of hand. Their importance to private business in this article is diminished for a few different reasons.

Whitelisting

Application (or process) whitelisting is a very large, complex, and onerous mitigation. Imagine, if you will, a haystack full of needles, where each needle is a different length, diameter, and type of metal. Now build a sieve which will filter all the hay and needles and ensure that only a single needle remains in the sieve. A very difficult, complex, and time-consuming activity.

Whitelisting has a place and there are multiple ways to achieve it; however, there are few environments where the risks justify the cost and complexity of making whitelisting a must-have.

Office macros

It is often said that businesses run on spreadsheets, and Microsoft Excel is no exception. The macro capability baked into Microsoft Office was one of the first exploit vectors and it remains in use today.

Microsoft has made changes to the default security posture of Office when it comes to macros; however, the functionality remains on by default, and this is what intruders rely on.

For organisations that don’t use Office macros, they are very easy to disable via Group Policy and should be mitigated this way. (Don’t count on deploying 64-bit Microsoft Office to do it for you; 64-bit Office still runs VBA macros.)
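For illustration, this is the sort of setting involved (a sketch: it writes the current-user policy key for Word 2016 directly, whereas in practice you would deploy the equivalent setting domain-wide via the Office ADMX templates in Group Policy):

```
:: VBAWarnings = 4 means "disable all macros without notification".
:: The same value exists per application (Excel, PowerPoint, etc.).
reg add "HKCU\Software\Policies\Microsoft\Office\16.0\Word\Security" ^
    /v VBAWarnings /t REG_DWORD /d 4 /f
```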

Application hardening

It’s unclear why the blocking of Adobe Flash and Java is referred to as application hardening. Both technologies are well-known exploit vectors, Flash in particular, often via website advertising networks.

Block Flash and Java at the perimeter of your networks. When done well, you will be shocked at how much faster webpages load when all those advertising frames aren’t rendered. As a bonus, you will save your Internet bandwidth as well.

Final

If you have gotten this far then I congratulate you. Push on and review the additional information available from ASD on each of these topics.



JIRA, Confluence and Let’s Encrypt

2017/02/17

I recently had to move a JIRA and Confluence environment to a new infrastructure stack. During the move we also changed the TLS certificates, and instead of using one of the paid-for incumbents we decided to give Let’s Encrypt a go.

The migration itself went smoothly. The first hurdle we hit was when checking the application integration between the two systems. The integration wasn’t functioning, and no amount of deleting, changing, and recreating it would fix it. The admin pages in JIRA and Confluence were both reporting SSL errors. When I dug into the actual Tomcat logs for each instance, the following errors were appearing:

Confluence:

Caused by: javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target

JIRA:

Caused by: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target

Some quick google-fu found a few items on the internet about this, though nothing specific to JIRA and Confluence.

https://community.letsencrypt.org/t/will-the-cross-root-cover-trust-by-the-default-list-in-the-jdk-jre/134/3

http://stackoverflow.com/questions/34110426/does-java-support-lets-encrypt-certificates

The root cause of the problem is that the JRE that’s bundled with our versions of JIRA and Confluence is too old and doesn’t include the Let’s Encrypt root certificate chain in its keystore. The above articles had references and code snippets to help get the Let’s Encrypt certificates into the JRE keystore, but they were all very ugly.

It’s worth mentioning that Oracle Java JRE 1.8.0_101 DOES include the Let’s Encrypt certificates.

Options at this point were:

  • Find a way to get the required certificates into the JRE keystore (the CLI method is described in the Let’s Encrypt community post above; a sketch appears below this list).
  • Install a new JRE on the servers and make JIRA and Confluence work with that, most likely putting us out of support with Atlassian.
  • Find out whether current JIRA and Confluence releases include the required JRE version and then upgrade JIRA and Confluence. This would need another round of testing to do the upgrade properly.
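For reference, the CLI method boils down to a keytool import along these lines (a sketch: the certificate file, alias, and keystore path are examples, and `changeit` is the default Java keystore password):

```
# Import the IdenTrust DST Root CA X3 certificate (which cross-signs
# Let's Encrypt) into the cacerts keystore of the bundled JRE.
keytool -import -trustcacerts -alias dstrootcax3 \
        -file DSTRootCAX3.pem \
        -keystore "$JAVA_HOME/jre/lib/security/cacerts" \
        -storepass changeit
```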

Moving to an unsupported configuration was undesirable, and I didn’t have the time to properly dive into a new round of testing to see whether newer JIRA and Confluence releases had the required JRE version. I did look for some detail on the Atlassian pages to determine the answer to this and wasn’t able to locate anything.

What I did find on an Atlassian page was this article https://confluence.atlassian.com/kb/unable-to-connect-to-ssl-services-due-to-pkix-path-building-failed-779355358.html which shows how to use a free JIRA and Confluence plugin to get third-party root certificates into the JRE keystore using a simple web page GUI. Some small CLI steps (a `cp`) are still required after the plugin does its thing, but it does make the fix less likely to fail.

Recommended.


TechNet Diskspd Utility: A Robust Storage Testing Tool (superseding SQLIO)

2017/01/31

A feature-rich and versatile storage testing tool, Diskspd (version 2.0.17) combines robust and granular IO workload definition with flexible runtime and output options, creating an ideal tool for synthetic storage subsystem testing and validation.

Source: TechNet Diskspd Utility: A Robust Storage Testing Tool (superseding SQLIO)
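As a taste of what a run looks like (an illustrative invocation, not from the source article; run `diskspd.exe -?` for the authoritative flag reference):

```
:: 60-second random-IO test: 8KiB blocks, 30% writes, 4 threads,
:: 8 outstanding IOs per thread, against a 1GiB test file, with latency stats.
diskspd.exe -c1G -d60 -r -w30 -t4 -o8 -b8K -L C:\temp\testfile.dat
```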