<![CDATA[ Blog | Unimus by NetCore j.s.a. ]]> https://blog.unimus.net https://blog.unimus.net/favicon.png Blog | Unimus by NetCore j.s.a. https://blog.unimus.net Fri, 29 Mar 2024 11:13:34 +0000 60 <![CDATA[ Release Overview - Unimus 2.4.0 ]]> https://blog.unimus.net/release-overview-unimus-2-4-0/ 65003a75f6bf640001854730 Tue, 12 Dec 2023 18:14:02 +0000 A new major Unimus release - 2.4.0 is finally here! With it come new features, reworks and improvements, new device support, and as always, bug fixes. This article is dedicated to the highlights of the 2.4.0 release, and the full Changelog is available at the bottom.

We have also published a video overview on our YouTube channel:


Mass Config Push macros overhaul

MCP macros screencap

Mass Config Push is a powerful Unimus feature for automation made even more versatile when used with macros. Macros (modifiers, actions and user variables) allow you to build complex configuration deployments, firmware upgrade procedures or maintenance tasks.

In 2.4.0, the modifier syntax was changed to a "$(modifier-name yes/no)" format to minimize ambiguity, and new modifiers "fail-on-error" and "wait-echo" were added. The format used for all actions is now "$(action-name value)". A new "delay" action was also introduced, which simply waits for a specified time. Positions within a line for modifiers and actions are now enforced. Check out the wiki for more info.
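For illustration, a push preset using the new syntax might look something like the sketch below. The command sequence, modifier placement and the delay unit are assumptions for this example only; the wiki documents the exact rules:

```text
$(fail-on-error no)
$(wait-echo yes)
configure terminal
hostname BRANCH-SW-01
$(delay 5000)
end
```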


NMS Sync enhancements


The old NMS Sync logic served well for three long years, though it wasn't without its shortcomings. The robust rework introduced in 2.4.0-Beta5 handles syncing from any combination of different systems or different instances of the same system.

We've introduced the concept of device orphaning in order to track changes on the NMS. A device becomes orphaned when it is no longer being provided by the NMS. This is useful for reflecting the current state of the synced remote NMS in Unimus.

Two new settings were added to the NMS Sync Preset configuration to accommodate the flexibility of the Sync logic. The Device Action policy controls creating new devices versus moving existing ones to a target Zone described by a Sync Preset Rule. With the Orphaning policy, Unimus can keep, unmanage, or delete orphaned devices at the end of a Sync.

Tl;dr: NMS Sync now handles added, changed, removed and disabled devices in virtually any complex setup and will create / move / delete or disable local devices to reflect the state on the NMS. The full NMS Sync rework rundown can be found in this blog article and on our YouTube channel.


SSH handling support

Multiple improvements were made to SSH connection establishment and session stability. A few examples:

  • Devices which don't play well when "none" SSH auth method is offered are now supported.
  • Login banner recognition has been adjusted to handle the quirkiest of banner types.
  • SSH version validation timeout can now be overridden to accommodate device types that need a little extra time to respond. Details in configuration documentation on our wiki.
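Going by the setting name given in the changelog, the override would be added to the Core's properties configuration, along these lines. The value and its time unit here are assumptions for the sketch; check the configuration documentation on our wiki for the authoritative details:

```text
# Give slow devices more time to respond during SSH version validation
# (value and unit are assumed for this example)
unimus.core.ssh-version-validation-timeout = 10000
```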

Zabbix and LibreNMS connectors

By popular demand we've added an option to import Zabbix hosts by their assigned Templates and Tags, using '%' and '@' prefixes respectively. Existing Sync Presets will continue working as expected. More info in the Functionality > Import > Zabbix section on the wiki.

We have also added new "Address field priority" and "Description field priority" selectors for LibreNMS. These allow you to configure which fields from Libre are pulled into device information in Unimus.


Other features and new device type support

Minor features such as the obligatory NetXMS client library update, improved built-in backup filters, UI/UX touch-ups and Zone support hot-fix for APIv2 come included in the 2.4.0 package.

23 new device types joined the list of devices supported by Unimus in this release. Full Changelog below for more info.


Fixes of various shape and form

It wouldn't be a major release without a healthy dose of bug fixes. One worth mentioning: Remote Core connections could be seen as up even after they were closed, which prevented the same Remote Core from reconnecting.

One security fix applied involved read-only users being able to add a new Zone.


Finally, here is the full 2.4.0 Changelog:

= Version 2.4.0 =
Features:
  Updated NetXMS client library to latest version (4.4.4)
  Added filtering of log messages inside Cisco SMB switch backups
  Added a built-in backup filter for new timestamp format in MikroTik RouterOS v7.10
  Improved built-in backup filters for newer versions of Ubiquiti EdgeSwitch X
  Improved handling of errors for Zones which use a NetXMS Agent as the Zone's proxy
  Added possibility to search by Credential Type in "Credentials > Device credentials" table
  Various minor UI and UX fixes and improvements
  Added support for devices which don't respond to the "none" SSH auth method
  Improved login banner recognition logic, more banner types are now supported
  If a DNS lookup for a device hostname fails, this will now be reported as an exact job failure reason
  Added support for session restoration prompts after login (for example on Cisco ISE)
  Added support for CLIs which don't echo the "?" when receiving commands like "show ?"
  Added the option to override the SSH version validation timeout (new "unimus.core.ssh-version-validation-timeout" setting)
  Added support for multi-partition backup on F5 devices
  Added support for all possible formats of user and root prompts in OPNsense
  Added support for output termination in newer versions of VyOS
  Added support for Cisco SMB switches which don't report their model on the CLI
  Added support for Linux shell login on netElastic vBGN
  Added support for output termination in paged output on netElastic vBGN
  Improved support for Adtran NetVanta devices
  Improved support for logins to JunOS in BSD mode
  Improved handling of quoted strings on MikroTik RouterOS v7
  Added support for paginated output on RAD devices
  Added support for backup multipliers in the Cisco WLC driver
  Improvements to the CLI mode change algorithm (better handling of specific edge cases)
  Improved handling of error messages when Unimus config file is missing

  Config Push modifiers were improved and reworked:
    - modifier syntax was changed to a "$(modifier-name yes/no)" format
    - enforced modifier and action positions within a line
    - added support for new modifiers "fail-on-error", "wait-echo" and their opposites (yes/no)
    - added support for a "delay" action, which simply waits for a specified time
    - all existing Config Push presets should be migrated to the new syntax automatically
    - full documentation: https://wiki.unimus.net/display/UNPUB/Mass+Config+Push

  Major improvements to NMS Sync:
    - devices no longer present in NMS can now be automatically Unmanaged / Deleted in Unimus
    - improved tracking of which local device corresponds to which NMS device, allowing to move devices locally when they are moved in the NMS
    - if a device is not found locally in the target Zone, you can now specify if Unimus looks for a candidate to move into the Zone, or creates a new device
    - allow specifying what scope Unimus searches in for move candidates when trying to move devices across Zones
    - fixed multiple issues that arose in setups where multiple NMSes were being imported from into the same Zone
    - more info at https://blog.unimus.net/new-nms-sync-logic-2-4-0/

  Improvements to the Zabbix NMS Sync connector:
    - added support for importing from Templates and Tags on top of existing options
    - introduced new prefixes for various import sources
    - existing Sync Presets should be migrated automatically, and continue working as expected
    - full documentation: https://wiki.unimus.net/display/UNPUB/Zabbix+importer

  Improvements to the LibreNMS NMS Sync connector:
    - added "Address field priority" selector, allowing to specify how Unimus pulls device addresses from Libre
    - added "Description field priority" selectors, allowing to specify how Unimus pulls device descriptions from Libre

  APIv2 improvements:
    - add optional query param to select zone for the "findByAddress" endpoint at "api/v2/devices/"
    - add option to specify Zone for the "createDevice" endpoint at "api/v2/devices"
    - add option to specify Managed State for the "createDevice" endpoint at "api/v2/devices"
    - add option to specify Managed State for multiple GET and UPDATE endpoints at "api/v2/devices"

  APIv3 improvements:
    - added possibility to search by "usedByDevices", "boundToDevices" and "credentialsTypes" in "api/v3/credentials" endpoint

  Added support for:
    - Adtran NetVanta chassis
    - ADVA FSP 1xx series
    - AricentOS devices
    - more variants of the Aruba Mobility Controller
    - Calix AXOS
    - Calix E7-2
    - Cambium cnPilot
    - Casa vCCAP
    - Cisco Catalyst 1200 series switches
    - Cisco ISE
    - ComNet Switches (based on CNGE11FX3TX8MS)
    - EdgeCore 7316
    - EdgeCore CSR320
    - Ericsson IPOS (SSR series)
    - Ericsson SGSN
    - F5 multi-partition
    - Grandstream GWN7800 series switches
    - improved netElastic vBGN support
    - Opengear Operations Manager
    - Radware Alteon
    - Ruckus vSZ-D
    - Ruckus vSZ-E
    - TRENDnet TI switches
    - Westermo L110
    - Westermo Lynx-5512
    - Westermo RedFox-5728
    - Westermo WeOS

Fixes:
  Fixed inter-connection delay was not applied for Telnet service availability check
  Fixed logs present in backups on Cisco SMB switches (would trigger new change-points and change notifications on every backup)
  Fixed NMS Sync from Zabbix versions 6.2.1 and newer within 6.2 was not working (6.4 and older than 6.2 worked properly)
  Fixed for Zones which use a NetXMS Agent as a proxy all tasks within a job would fail if a single task failed
  Fixed inter-connection delay was not applied for NetXMS TCP proxy connections
  Fixed elements in combo box sometimes appearing multiple times in multiple screens across the application
  Fixed elements in combo box sometimes missing in multiple screens across the application
  Fixed beginning of lines could be truncated in the diff view on specific browser configurations
  Fixed reporting wrong Last Job Status for unmanaged devices over API (multiple APIv2 "/devices" endpoints)
  Fixed attempting to input a very long FQDN into the DB address during the Deploy Wizard was not possible
  Fixed Credential usage could be counted twice if a credential was used for both SSH and Telnet (Credential > Usage screen)
  Fixed CLI Mode Change password usage could be counted twice if a credential was used for both SSH and Telnet (Credential > Usage screen)
  Fixed an error that could occur if you switched screens while multiple popup windows were opened
  Fixed the possibility to input extremely long strings into dropdowns, which would eventually trigger an error
  Fixed Config Push triggered via API with an empty device ID string would create a wrong entry in Push results
  Fixed Delete Push Job History retention job was not re-scheduled when default schedule is changed
  Fixed issues with Config Push presets being deleted while they were opened in another browser window
  Fixed various minor UI and UX issues and inconsistencies
  Fixed jobs using Telnet could randomly fail
  Fixed login to devices could fail if certain login banners were used
  Fixed Remote Core would not be able to reconnect to the Server in specific cases
  Fixed Remote Core connections could be considered still alive even after the connection was closed
  Fixed jobs on Cambium 450i would always fail
  Fixed jobs on newer versions of VyOS failing
  Fixed login failing on specific Palo Alto devices
  Fixed specific commands on Aruba Mobility Controller (ArubaMM) could cause a Config Push to fail
  Fixed backups could fail on Cisco WLC under heavy load, or with very large configs
  Fixed jobs on specific Moxa switch types could randomly fail
  Fixed jobs on specific RAD devices would fail
  Fixed jobs on specific Adtran NetVanta devices would fail
  Fixed discovery failing on OPNsense with specific account and shell type combinations
  Fixed discovery would fail on JunOS devices with specific BSD prompt format
  Fixed discovery would fail for specific versions of the Aruba Mobility Controller
  Fixed sporadic config change notifications on MikroTik RouterOS v7

Security fixes:
  Fixed read-only users could add a new Zone
  Fixed Credentials and CLI Mode change passwords could be printed to the log file in cleartext on specific API calls

Embedded Core version:
  2.4.0

Migration warnings:
  On MikroTik RouterOS v7, you can get a single config change notification due to changes in how quoted strings
  are handled in our ROSv7 driver. This config change should only happen on the first backup job after upgrade
  and can be ignored.
]]>
<![CDATA[ NMS Sync improvements in Unimus 2.4.0 ]]> https://blog.unimus.net/new-nms-sync-logic-2-4-0/ 6538dc51d84df10001511f2a Tue, 12 Dec 2023 02:41:58 +0000 The Unimus 2.4.0 release brings a complete rewrite of the NMS Sync feature logic. It gives you more options and control over Sync behavior, with minimal complexity added to the user experience.

The key addition to the functionality is device orphaning. Its role is to keep track of changes in device presence and state provided by the import source (NMS). Device additions, changes or removals are now automatically reflected in local inventory (Unimus).

There are two new settings to notice in the GUI of the NMS Sync Preset configuration. These control the Device Action policy and the Orphaning policy, described in detail in the sections below.

Lastly, we have also added two new group identifiers for Zabbix. We can now select devices to import by their Templates and/or Tags, prefixing the names with '%' and '@', respectively. Host groups are used as before, without any prefix.
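As a sketch, Sync Rule entries could then look like the following. The Template, Tag and group names are made up for illustration; see the wiki for the authoritative syntax:

```text
Core routers        <- plain name: Zabbix host group (as before)
%Template Net SNMP  <- '%' prefix: import hosts by Template
@backup-enabled     <- '@' prefix: import hosts by Tag
```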

Showcase of new features

In case you prefer a video format, you can watch an overview of the changes here:

How it works

The NMS Sync is a feature for importing devices into Unimus. Synchronization of devices from NMS systems automates the adoption of new networks into Unimus and simplifies device management by delegating the task entirely to the NMS (no need to manage devices across both Unimus and the NMS separately).

Since 2.1.0 the NMS Sync configuration is Preset-based, where each Sync Preset defines from which NMS to sync which devices into which Zone(s). The rework in 2.4.0 expands the Preset-based mechanics by Preset adoption. Each device imported by an NMS Sync Preset is now adopted by that given Sync Preset, allowing consistent identification of synced devices.

NMS Sync takes a device set provided by import source (NMS) and mirrors it in the local inventory (Unimus). Let's take a look at steps needed for a run-of-the-mill setup of an NMS Sync Preset:

  1. Create NMS Sync Preset of your NMS type specifying URL and credentials
  2. Create a Sync Rule specifying which groups of devices in the NMS should be imported
  3. Specify this Rule's target Zone to import devices into
  4. Go to step 2 if you want to define another Rule
  5. Leave other settings at default values
  6. Run and/or schedule the NMS Sync

After an NMS Sync runs its course, it is considered successful if there were no errors or failures; otherwise it goes into the 'Failed jobs' bucket. A detailed report of either can be viewed in 'Dashboard > Import job history'. There we can find all the information about the import source, sync errors, and the counts of processed, imported and updated devices and failed operations.

Screencap of successful NMS sync result
Successful NMS sync result

The nature of the beast

Unimus currently supports seven different NMSes. You may create Sync Presets with different systems and sync to the same Zone or you may have different instances of the same NMS and sync to different Zones. These systems may or may not support universally unique identifiers (UUIDs) for devices. You may even want to move devices between Zones manually and still be sure of what happens to them following an NMS Sync.

The logic needed to behave consistently through any changes, be it on the NMS, in the Sync Preset Rules configuration, or in other settings in Unimus. It also needed to account for devices with a matching address both within and outside of a Sync Rule target Zone. Suffice it to say, the logic rework turned out to be quite the undertaking.

The following sections take a closer look at concepts, behaviors and policies introduced with the NMS Sync rework.

Remote UUIDs

As mentioned before, some of the NMSes Unimus supports provide UUIDs for devices during a Sync operation. These uniquely identify previously imported devices, enabling more consistent device tracking between Unimus and the remote systems. As an example, when UUIDs are used, changing a host's IP address on the NMS also results in an update of the local device's IP address. This would not be possible without UUIDs, since the IP address itself serves as the identifying parameter when a UUID is not present - and it is the very thing that changed.

NMSes that support UUIDs: LibreNMS, NetXMS, Observium and Panopta

Device Action policy

This policy controls the behavior when syncing a device from the NMS. Unimus can either always Create a new device for the one provided by the import source, or try to Move an existing one within a scope if it matches.

The default setting is Move within Preset Zones. The "Preset Zones" scope consists of the Zones used by the Sync Preset's Rules. Before creating a new device (in the target Zone a Rule specifies), the Sync logic looks for an existing device with a matching UUID or address within the scope and moves it to the target Zone if one is found. The benefit is tracking and moving devices in response to changes in the NMS, which ensures duplicates are not created.

The Move from All Zones option is similar to the previous one, but a system wide scope is used when looking for matching devices to move to the Rule target Zone.

The No Move/Always Create policy setting turns off the moving of devices during the NMS Sync. Only the target Zone is examined when matching a device provided by the import source. If a device with the same UUID or address is not present in this Zone, it is created; if it is found, its attributes are updated. This is handy for environments using the same addressing for multiple devices in multiple Zones, such as an MSP managing external networks.

Note: moving an existing device fails if two or more devices with the same identifier are found in the scope, as the algorithm has no way of determining which device the user intended to "move", and counts it as 'failed to update'.
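The move-candidate lookup described above can be sketched roughly like this. This is an illustrative Python approximation of the documented behavior, not Unimus source code; the field names and data shapes are assumptions:

```python
# Illustrative sketch of the documented move-candidate logic.
# Not Unimus source; field names and structures are assumptions.

def find_move_candidate(nms_device, local_devices, scope_zones):
    """Find a unique local device matching by UUID (preferred) or by
    address within the given Zone scope. Returns None when no match
    exists; raises when the match is ambiguous ('failed to update')."""
    in_scope = [d for d in local_devices if d["zone"] in scope_zones]

    if nms_device.get("uuid"):
        # Prefer a UUID match when the NMS provides one
        matches = [d for d in in_scope if d.get("uuid") == nms_device["uuid"]]
    else:
        # Fall back to matching by address
        matches = [d for d in in_scope if d["address"] == nms_device["address"]]

    if len(matches) > 1:
        # Two or more candidates: the algorithm cannot tell which device
        # the user intended to move, so the device fails to update
        raise ValueError("ambiguous match - failed to update")
    return matches[0] if matches else None
```

Depending on the Device Action policy, `scope_zones` would be the Preset's Zones, all Zones, or just the Rule's target Zone.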

Device Adoption

An adopted device is simply a device that was imported from an NMS by a Sync Preset. Device adoption designates its parent Sync Preset as the only one that manages any changes on it. An existing device in the local inventory is eligible for adoption by a Sync Preset when it is not adopted (a manually created device) or when it is orphaned. Devices can become orphaned in multiple cases:

  • device is no longer present in the device set provided to Unimus by the NMS (was deleted on the NMS, or moved outside the devices Unimus imports)
  • when the Zone was changed in a Sync Rule, all devices adopted by this Sync Rule are orphaned
  • when the Sync Rule is deleted, all devices adopted by this Sync Rule are orphaned
  • when a device is moved to a different Zone manually by the user (this includes the result of Zone deletion with moving of devices to the default Zone)

Orphaning status can be checked via device info.

Screencap of Device info
Device info showing 'Orphaning reason: Device not present on NMS'

Adoption is useful for associating devices in Unimus that are not yet adopted with their counterparts on the NMS.

Orphaning policy

Orphaning policy determines what happens to orphaned devices during an NMS Sync. The user has three options:

  1. No action - device is kept as is, though it is now eligible for adoption by any NMS Sync Preset
  2. Unmanage - device becomes unmanaged in Unimus, no further jobs will run on it. It will be kept in Unimus, along with any existing backups; it is also eligible for another adoption
  3. Delete - device will be deleted from Unimus, alongside its backups, so this option should be used with a clear intent and a dose of awareness of possible results
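The three options above can be sketched as follows. This is an illustrative Python approximation of the behavior described in this section, not Unimus source; the policy names and fields are assumptions:

```python
# Illustrative sketch of Orphaning policy handling at the end of a Sync.
# Not Unimus source; the policy names and fields are assumptions.

def handle_orphans(orphans, policy):
    """Apply the configured Orphaning policy to orphaned devices."""
    kept = []
    for device in orphans:
        if policy == "DELETE":
            # Device (and its backups) is removed from Unimus entirely
            continue
        if policy == "UNMANAGE":
            # Device is kept, but no further jobs will run on it
            device["managed"] = False
        # In all non-delete cases the device loses its adopting Preset,
        # making it eligible for adoption by any NMS Sync Preset
        device["adopted_by"] = None
        kept.append(device)
    return kept
```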

Example scenario

Let's simulate a scenario where we want to defer ongoing device management to NMS in a way that any change to device set on NMS is reflected in local inventory on Unimus.

The starting point on Unimus is a set of devices not adopted by any Sync Preset. The first thing that will happen is pairing the devices between the two systems and adopting them by our Sync Preset. In addition, the devices will be moved to Zones corresponding to the organizational structure present on the NMS. Next, any other devices not yet present in Unimus will be imported.

This is easily achieved by setting up a Sync Preset with Preset rules specifying organizational groups on the NMS and pointing to specific Zones. Since we want the existing devices moved to specific Zones and we want the algorithm to look for them throughout entire Unimus we will use the 'Move from All Zones' Device Action policy.


After running the NMS Sync we see a successful import notification. Devices are now properly adopted and organized among Zones. Automatic device synchronization is achieved by setting the Sync preset as a 'Scheduled sync'. Any changes, additions or removals of devices in organizational groups on the NMS are reflected in local inventory after the next NMS Sync.

Final words

Congratulations, dear reader, on making it this far! You are now acquainted with the updates to the NMS Sync feature in Unimus. Feel free to experiment with it, and let us know what works and if something doesn't on our Forum.

]]>
<![CDATA[ Generating reports from Unimus job failures ]]> https://blog.unimus.net/generating-reports-from-unimus-job-failures/ 65098185f6bf640001854a03 Tue, 07 Nov 2023 16:07:23 +0000 “Why do our backups fail, Bruce? So that we can learn to fix them.”

Intro

With hundreds or even many thousands of jobs daily for some Unimus users, there are bound to be a few failed ones. Jobs such as Discovery or Backup fail for various reasons related to connection errors, refused credentials or unsupported devices. The results of failed jobs are all captured and error logs are viewable in the web GUI. From error logs one can learn the details of why a discovery or a backup for a device failed and use the information to remedy the situation.

In this brief article we will go over generating job report exports. These export files can then be processed externally by a monitoring system, a reporting system, an external parser, or by hand. But first, the error log data needs to be retrieved. The data is stored in the database, so to access it we need the database credentials and a few well-constructed queries. We will show how to generate reports of failed jobs and review them by hand in a spreadsheet editor (Excel), using its advanced data-processing functions.

Getting to the data

Unimus supports the MySQL, PostgreSQL and MSSQL relational databases in addition to the file-based HSQL. These store data in tables with columns and rows, and use structured query language (SQL) for data manipulation and querying. We'll use such queries to extract the data about failed jobs and export it to a text file.

Connecting to the DB

In our test scenario, an arbitrary Unimus server is using MariaDB, a database system forked from MySQL. To access MariaDB we simply run the mariadb client from a shell, specifying the host IP, database name and credentials:

mariadb --host=127.0.0.1 --database=unimusmdb --user=will --password

Constructing the query

Let's say we are interested in failed Discovery jobs. For each device, we want some info about the device itself, plus the error log of its last Discovery job, if that job failed.

We will be using data from tables device, device_history_job and zone. The device table contains useful columns like id, address, description, model, type and vendor. The device_history_job table is populated by useful data in the create_time, error_log, info, job_type, device_id and successful columns and the zone table is used to describe device zone membership.

We SELECT columns we want displayed FROM the tables and LEFT JOIN table device_history_job ON id of 'last device job that was a Discovery' via a subquery and table zone ON id of the zone. Then we filter the results with WHERE by 'failed jobs'. And let's say we want to limit the results to recent ones, e.g. ones that took place in the last week. Our query then might look something like this:

SELECT
  d.id,
  dhj.info,
  DATE_FORMAT(FROM_UNIXTIME(dhj.create_time), '%H:%i:%s %d.%m.%Y'),
  z.name,
  dhj.job_type,
  REPLACE(dhj.error_log, '\r\n', ' ') AS error_log
FROM device d
LEFT JOIN device_history_job dhj ON dhj.id = (
  select id
  from device_history_job
  where d.id = device_id
    and job_type = 'DISCOVERY'
  order by create_time
  desc limit 1)
LEFT JOIN zone z ON z.id = d.zone_id
WHERE dhj.successful = 0
  AND dhj.create_time > UNIX_TIMESTAMP(DATE_ADD(CURDATE(), INTERVAL -7 DAY));
MySQL query for fetching last Discovery jobs within last week if they failed

We have used REPLACE to output the error log on a single line for a more comprehensible way to display results.

Let's name the columns properly and put the query in quotation marks. Now we can feed the query into the mariadb command via --execute option and write the output into a local file:

mariadb --host=127.0.0.1 --user=will --database=unimusmdb --password --execute="SELECT \
  d.id AS \`ID\`, \
  dhj.info AS \`Device info\`, \
  DATE_FORMAT(FROM_UNIXTIME(dhj.create_time), '%H:%i:%s %d.%m.%Y') AS \`Time\`, \
  z.name AS Zone, \
  dhj.job_type AS \`Job type\`, \
  REPLACE(dhj.error_log, '\r\n', ' ') AS \`Error log\` \
FROM device d \
LEFT JOIN device_history_job dhj ON dhj.id = (\
  select id \
  from device_history_job \
  where d.id = device_id \
    and job_type = 'DISCOVERY' \
  order by create_time \
  desc limit 1) \
LEFT JOIN zone z ON z.id = d.zone_id \
WHERE dhj.successful = 0 \
  AND dhj.create_time > UNIX_TIMESTAMP(DATE_ADD(CURDATE(),INTERVAL -7 DAY))" > disco_local.csv
Shell command to query failed Discovery jobs

For failed Backup jobs we would just change the job_type in the WHERE section of the query to 'BACKUP'. Also, since a successful Discovery is a prerequisite for a Backup job, we can select additional columns, like vendor, type and model, to describe the discovered device in more detail. Queries and shell commands for failed backups, along with Postgres and MSSQL versions can be found on our GitHub.
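For instance, only the subquery filter needs to change (with the extra columns simply added to the SELECT list):

```sql
  where d.id = device_id
    and job_type = 'BACKUP'   -- was 'DISCOVERY'
```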

Reviewing the data

We are using OnlyOffice in our scenario because it is free and gets the job done. To view the data, simply open the file in your favorite spreadsheet tool and choose 'Tab' as the delimiter:

Tab as delimiter

Alternatively you can import the data using the Data Import feature:

Get data from csv file

Now select the data and format it as a table for advanced filtering capabilities. You can then filter the results by devices, zones or specific keywords in the error log messages. Here is an example of how to filter the logs based on multiple criteria:


Conclusion

And that's basically it! This short guide should provide a starting point for generating and viewing failed job reports from Unimus. There is also a thread going on our Forum for any feedback, questions or possible improvements.

]]>
<![CDATA[ Running Unimus Core in a container on MikroTik RouterOS ]]> https://blog.unimus.net/running-unimus-core-in-a-container-on-mikrotik-routeros/ 64a7f6573f52d200013e410b Thu, 21 Sep 2023 15:55:44 +0000 The purpose of this article is to guide you through configuration for running Unimus remote Core container image on MikroTik's RouterOS.

Introduction

Today we will be discussing an exciting feature of Unimus - support for remote networks and distributed polling. Managed devices do not always have to be directly reachable by the Unimus Server. In a scenario where our devices and Unimus are separated by a WAN it would make sense to utilize a remote agent. All client devices would be polled locally which eliminates the need for individual direct server-device connections. This saves resources such as bandwidth and processing power and simplifies administration as you only need to maintain connectivity to a single host in each remote location. We would also get vastly improved scalability, enhanced security and fault isolation.

Getting to the Core™ of the matter

To extend Unimus functionality to a remote network we would use Unimus Core. A Core is the brains of Unimus. Same as the embedded Core on any Unimus Server it performs network automation, backups, change management and more on managed network devices. Acting as a remote poller, Unimus Core communicates with Unimus Server over a secure TCP connection conforming to modern industry standards. We can install Unimus Core on any supported OS, run a portable version or run a container image. Find out more on our wiki.

Fairly recently (August 2022), MikroTik added container support to RouterOS. This introduces a nifty new way of deploying Unimus Core directly on an edge router, reducing the number of devices required in the network. Let's have a look at how to set this up.

Setup

Behold, the system we will be testing our remote Core deployment on:

Network diagram with Unimus Remote Core container deployed on Mikrotik router

Starting from the right, the Unimus Server is installed on Ubuntu server (22.04) running on Raspberry Pi 2. It is connected via static IP to the HQ router – a MikroTik RouterBOARD. The HQ router is doing source NAT for Unimus Server, translating the private source IP to WAN interface public IP. This allows Unimus Server to reach resources outside the LAN.

The HQ router is also configured for destination NAT (port forwarding), directing incoming TCP 5509 and TCP 8085 traffic to the Unimus Server. TCP 5509 allows the inbound remote Core connection. TCP 8085 is not strictly required for our demonstration; we open it simply for remote access to the HTTP GUI.
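In RouterOS v7 terms, the HQ dst-nat rules might look roughly like this. The WAN interface name and the Unimus Server address below are placeholders, not values from this lab:

```text
/ip/firewall/nat/add chain=dstnat action=dst-nat protocol=tcp dst-port=5509 in-interface=ether1 to-addresses=192.168.88.10
/ip/firewall/nat/add chain=dstnat action=dst-nat protocol=tcp dst-port=8085 in-interface=ether1 to-addresses=192.168.88.10
```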


The left side represents a remote network. The Branch router, a MikroTik RB5009UG+S+, is our edge router, and as a device capable of running containers it will host Unimus Core in one. Connected on the LAN side is the device we want to manage, another MikroTik RouterBOARD - the Branch switch.

Our Unimus Core container will have its own virtual Ethernet interface ('veth') assigned for outside communication. Although this veth could be added to the local bridge connecting to the Branch switch, it makes more sense security-wise to add it to a separate 'containers' bridge. This way any container traffic goes through the routing engine and firewall, where it can be subjected to policies.

The Branch router, which is likely already source-NATting traffic for the whole branch network, also needs to SNAT the container subnet to allow outbound communication to the Unimus Server.
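A masquerade rule covering the container subnet used later in this article could look like this (the outgoing interface name is a placeholder):

```text
/ip/firewall/nat/add chain=srcnat action=masquerade src-address=10.1.1.0/24 out-interface=ether1
```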


The edge routers' WAN ports are reachable over an 'internet' simulated by a local network. This is sufficient for our testing purposes, as it provides indirect connectivity between the Unimus Server and the remote Core.

With all the moving (really just sitting and humming softly) parts introduced, let’s go through the steps needed to achieve our desired result in detail.

Configuration

The key parts we will be focusing on are Unimus Server and MikroTik RouterBOARD (Branch router) running Unimus Core in a container.

Unimus Server

We assume you already have a Unimus Server up and running. If not, feel free to check out our wiki to get you started.

Once we are all set, we can begin by navigating to Zones and hitting 'Add new Zone'. This Zone will represent our remote location. Enter the Zone name, ID and Remote core connection method, and hit 'Confirm' to proceed.

Screenshot for adding a Zone to Unimus

Next, we retrieve the remote core access key by hitting 'Show' and save it for later. It will be used to establish connection to the Unimus Server.

Screenshot of displaying access key for a Remote Core Zone in Unimus

Branch router and Unimus Core

Before attempting to run any containers let's take care of the prerequisites:

RouterOS version 7.5 or higher is needed to run containers, so update as necessary. The container package is compatible with the arm, arm64 and x86 architectures.

System resource printout from RouterOS CLI

The requirements above are met. We have that going for us, which is nice. The following steps will take us through configuring the Branch router:

secure the router, as the internet is a dangerous place

Take care of basic security. Configure a strong password and restrict management access using a firewall policy.

get the container package and install it

Visit mikrotik.com/download and get the 'extra packages' archive for your architecture. Extract the contents, upload the container package to your router and reboot.

After reboot we can verify the currently installed packages:

System package printout from RouterOS CLI

enable device-mode containers

We need to have physical access to the router for this part due to security implications. We'll be prompted to press the reset button to apply the setting.

/system/device-mode/update container=yes

configure container interface

To allow our container access to the network, it needs a virtual interface. First, we will create a bridge for the container:

/interface/bridge/add name=containers
/ip/address/add address=10.1.1.1/24 interface=containers

Then we create a veth1 interface, assign the IP address that Unimus Core will use to communicate with the Unimus Server, and add the interface to the newly created bridge:

/interface/veth/add name=veth1 address=10.1.1.2/24 gateway=10.1.1.1
/interface/bridge/port add bridge=containers interface=veth1

configure NAT

Source NAT is needed for outbound communication. We want connections originating from the container subnet translated to an IP address reachable from the outside:

/ip/firewall/nat/
add action=masquerade chain=srcnat src-address=10.1.1.0/24 out-interface=ether1

use an external drive (optional)

To avoid cluttering your platform storage, we recommend using a USB stick or an external hard drive for container images and volumes. The drive needs to be formatted with an ext3 or ext4 filesystem:

Winbox screenshot of Disks menu
This author alternates between using the CLI and the GUI. Use whichever is more comfortable for you; we neither endorse nor discourage either one.

On to the configuration of the container for Unimus Core. We need to tell Unimus Core where to reach the Unimus Server via container environment variables, pull the Unimus Core container image and run it.

define environment variables

Variables are defined in key-value pairs. These are needed to point Unimus Core to the Unimus Server and to input the Access Key we got earlier. Additionally, we can set the timezone and memory constraints for Java, and there is an option to define mount points for volumes for data persistence. Details are on GitHub.

/container/envs/

add key=UNIMUS_SERVER_ADDRESS name=unimuscore_envs value=10.2.3.4
add key=UNIMUS_SERVER_PORT name=unimuscore_envs value=5509
add key=UNIMUS_SERVER_ACCESS_KEY name=unimuscore_envs value=\
    "v3ry_crypto;much_s3cr3t;W0w.."
add key=TZ name=unimuscore_envs value=Europe/Budapest
add key=XMX name=unimuscore_envs value=256M
add key=XMS name=unimuscore_envs value=128M

/container/mounts/

add dst=/etc/unimus-core name=unimuscore_config src=/usb1-part1/config

add the container image

We will pull the latest Unimus Core container image straight from Docker Hub at https://registry-1.docker.io. You could also import one from a PC (via docker pull/save) or build your own. The remote Core needs to run the same version as the embedded Core on the Unimus Server to avoid compatibility issues, so make sure you grab a suitable version.

/container/config/set
registry-url=https://registry-1.docker.io tmpdir=usb1-part1/pull
/container/add
remote-image=croc/unimus-core-arm64:latest interface=veth1 root-dir=usb1-part1/unimuscore mounts=unimuscore_config envlist=unimuscore_envs logging=yes
- tmpdir specifies where to save the image
- root-dir specifies where to extract the image
- mounts specify mount points for volumes to ensure data persistence if container is removed or replaced
- envlist specifies environment variables defined above
- logging is enabled for troubleshooting
parameters' function explained

After extraction the container should go to the "stopped" status. Check via:

/container/print
Printout of container section in RouterOS CLI

run it!

All is set to start our remote Unimus Core.

/container/start 0

run on boot (optional)

It can come in handy to configure our container to start on RouterOS boot, so it comes back up automatically if the Branch router gets rebooted for any reason.

/container/set start-on-boot=yes 0

All's well that ends well

Assuming we have set it all up and everything went as planned, we should see our remote Core's status as online:

Unimus screenshot of Zone online

Adding our test device (the Branch switch) under the remote Core Zone (BO1) prompts a discovery, which succeeds:

Unimus screenshot of successful job

Troubleshooting

The most common issues relate to Unimus Server connectivity. Here's a checklist of items to try if things don't work:

  • Unimus Server is UP

Double-check your Unimus Server is up and running. Access it via browser at http(s)://&lt;YourServerIP&gt;:8085

  • Firewall policy

Verify there's a firewall rule allowing the connection from outside your network.

  • Check NAT

A destination NAT rule is necessary for Core connection traffic. We need to translate the destination address of incoming remote Core traffic to the Unimus Server IP address. TCP port 5509 is used by default.

  • Check variables

Our Unimus Core container uses environment variables to establish connection to the server. Make sure the values in key-value pairs reflect your setup:

UNIMUS_SERVER_ADDRESS is the IP address where Unimus Server is reachable (before NAT)

UNIMUS_SERVER_PORT is the TCP port number (default 5509) on which Unimus listens for remote core messages

UNIMUS_SERVER_ACCESS_KEY is the long string generated when you create a new Remote Core Zone

Enabled container logs make troubleshooting easier:

Log of connection error due to misconfigured port
Log of misconfigured Core connection port
Log of malformed access key
Log of wrong access key
  • Check versions

For the Unimus Server to accept the remote Core connection, both need to run the same version. The Unimus Server log file will reveal this issue:

Log of mismatched Core versions
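Most of these checks can be scripted from any host on the 'internet' side of the lab. Below is a minimal, hedged sketch using bash's built-in /dev/tcp; the address 10.2.3.4 and the ports come from this article's lab setup, so substitute your own values:

```shell
#!/bin/bash
# Probe the two forwarded ports on the HQ public IP (lab value - replace as needed).
HOST=10.2.3.4
for PORT in 5509 8085; do
    if timeout 2 bash -c "exec 3<>/dev/tcp/$HOST/$PORT" 2>/dev/null; then
        echo "TCP $PORT open"
    else
        echo "TCP $PORT unreachable - recheck dst-nat and firewall rules"
    fi
done
```

If TCP 5509 shows unreachable, revisit the destination NAT rule and firewall policy on the HQ router before digging into the container itself.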

Attached below are the config exports used in our test setup:

# HQ router
/interface bridge
add name=local
/interface bridge port
add bridge=local interface=ether2
/ip address
add address=172.31.254.1/24 interface=local network=172.31.254.0
add address=10.2.3.4/24 comment=internet interface=ether1 network=10.2.3.0
/ip dhcp-server network
add address=172.31.254.0/24 dns-server=172.31.254.1 gateway=172.31.254.1
/ip dns
set servers=10.2.3.254 allow-remote-requests=yes
/ip firewall nat
add action=masquerade chain=srcnat out-interface=ether1
add action=dst-nat chain=dstnat dst-address=10.2.3.4 dst-port=5509,8085 protocol=tcp to-addresses=172.31.254.2
/ip route
add distance=1 gateway=10.2.3.254
/system clock
set time-zone-name=Europe/Bratislava
/system identity
set name=HQ
HQ config
# Branch router
/interface bridge
add name=containers
add name=local
/interface veth
add address=10.1.1.2/24 gateway=10.1.1.1 name=veth1
/container mounts
add dst=/etc/unimus-core name=unimuscore_config src=/usb1-part1/config
/container
add envlist=unimuscore_envs interface=veth1 logging=yes
/container config
set registry-url=https://registry-1.docker.io tmpdir=usb1-part1/pull
/container envs
add key=UNIMUS_SERVER_ADDRESS name=unimuscore_envs value=10.2.3.4
add key=UNIMUS_SERVER_PORT name=unimuscore_envs value=5509
add key=UNIMUS_SERVER_ACCESS_KEY name=unimuscore_envs value="secret key"
add key=TZ name=unimuscore_envs value=Europe/Budapest
add key=XMX name=unimuscore_envs value=256M
add key=XMS name=unimuscore_envs value=128M
/interface bridge port
add bridge=local interface=ether2
add bridge=containers interface=veth1
/ip address
add address=10.8.9.10/24 interface=local network=10.8.9.0
add address=10.5.6.7/24 comment="internet" interface=ether1 network=10.5.6.0
add address=10.1.1.1/24 interface=containers network=10.1.1.0
/ip dns
set servers=10.5.6.254  allow-remote-requests=yes
/ip firewall nat
add action=masquerade chain=srcnat src-address=10.1.1.0/24 out-interface=ether1
/ip route
add disabled=no dst-address=0.0.0.0/0 gateway=10.5.6.254 routing-table=main
/system clock
set time-zone-name=Europe/Bratislava
/system identity
set name="Branch router"
Branch router config

Final words

We hope this guide can serve as a template for deploying the Unimus Core container on MikroTik's RouterOS. If you encounter any difficulties or have additional questions, please reach out on our forum.

]]>
<![CDATA[ Backup of devices without CLI with Unimus ]]> https://blog.unimus.net/backing-up-the-unbackupable/ 64ac2ae13f52d200013e422f Fri, 08 Sep 2023 12:03:19 +0000 Today we will have a look at how to back up devices which don't output their configs over the CLI, and which Unimus therefore doesn't support out-of-the-box. A bit of scripting and the Unimus API enables you to navigate around this adversity.

Intro

Configuration backup is an essential practice in any network environment. Possessing recent configuration files safeguards against data loss or system failure and lets you restore your network to a working state much faster than redeploying everything manually from scratch. Believe me, this has saved money (and people's jobs) in the past.

Back on a more serious note: as a full-featured Network Configuration Management solution, Unimus performs these backup duties for you. Over time, Unimus builds a versioned configuration history of your network from each device backup and notifies you of any and all changes in your network. With this visibility, Unimus gives you a whole new level of change management, adding to other user-favorite features like configuration auditing and change automation.

How Unimus backs up your devices and when devices make it complicated

Unimus gathers backups from managed devices via the CLI, just as a user would. It logs into the device using Telnet or SSH (though please don't use Telnet) and then retrieves the configuration of the device. And therein lies a potential issue. Not all systems have a CLI, for example MikroTik's SwOS (learn how to get around that in this article on our blog), and some that do have a CLI do not support CLI-based configuration backup. We'll call these the Unbackupables.

To illustrate, let's have a look at FortiAuthenticator by Fortinet. As a network element providing centralized authentication services, its configuration consists of both a textual configuration, available over the CLI, and a configuration database, which is managed through the web GUI. The config DB includes users, groups, the FortiToken device list, certificates and many other config elements. All of them can be neatly packed into a binary backup file. FortiAuthenticator allows you to set up auto-backup by specifying an FTP server address, FTP directory, backup frequency and backup time.

FortiAuthenticator screenshot of Auto-backup menu
FortiAuthenticator auto-backup config GUI

Another example where the CLI doesn't support a config dump is the PMP 450 access point by Cambium. The complete configuration can be downloaded via the web interface as a text file (.cfg). The backup file can then be passed along to an FTP server using device-specific CLI commands.

Cambium PMP web GUI for downloading config file
Cambium PMP 450 configuration file download via GUI

We now know the particularities of backups on some devices and, as was cleverly foreshadowed above, we can have binary/text backup files pushed to an FTP server (TFTP, SCP or SFTP also work). We can also push backups into Unimus using its API. Let's use all this and a few lines of code to back up even the Unbackupables.

Setup

In real-world deployments, topologies can and will be complex. With our setup, however, it all boils down to three discrete components: the network device, an FTP server and Unimus, plus a script to tie it all together. We then automate the process via scheduling. The following section captures our test environment in more detail:

Simple diagram of config push to Unimus

1) A Cisco router on the left will represent our Unbackupable: a network device Unimus cannot directly pull a config backup from. It needs to have the backup delivered to an FTP server. Just a note: we are using a Cisco device purely for illustration, as Unimus supports all Cisco devices natively, without needing an FTP server.

2) An FTP (or TFTP, SCP, SFTP...) server runs on either Linux or Windows Server; the only difference is which script version we use, Shell or PowerShell. The script will push backups to Unimus (using the API) from files on the FTP server.

3) Unimus. We assume you already have your Unimus server up and running. If not, we have a guide for how to deploy Unimus on our wiki.

The Script. Simply put, it creates backups in Unimus via the API. It is described in detail later.


For our setup to work, device backups need to somehow find their way to the FTP server. There are two scenarios for how this can be achieved:

1) Device itself is capable of a scheduled backup push to an FTP server

Similar to the FortiAuthenticator example mentioned in the Intro, a device can be set up to push its config backup to an FTP server itself. Likewise, on Cisco IOS, the Embedded Event Manager (EEM) feature enables you to create an applet that executes a 'copy config to FTP' action when a scheduled-time event is triggered. The following image illustrates this logic:

Diagram 1 of config push to Unimus
(1) Cisco EEM prompts backup push to FTP (2) Server scheduler executes the script (3) The script grabs backups from FTP and (4) pushes them to Unimus.

This is the simpler scenario: only two jobs need to be scheduled. The first one runs on the Cisco router via an EEM applet that pushes the running configuration to an FTP server at a scheduled time. Example below:

event manager applet backup-config-daily
 event timer cron name daily-time cron-entry "30 3 * * *"
 action 1.0 cli command "enable"
 action 2.0 cli command "copy running-config ftp://username:password@ftp-server-ip/router.ip.address/config-$(date \"+%Y%m%d%H%M%S\").cfg"
Cisco IOS EEM applet for 'config backup to FTP' everyday at 3:30am

The second job is scheduled on the host running the FTP server via Cron or Task Scheduler, depending on the environment used.

2) The other option - the device has to be periodically instructed to push its backup to an FTP server

Since the device itself is incapable of automatically backing itself up to the FTP server, an outside agent (Unimus) will, on a schedule, send commands instructing the device to create a config backup and copy it to the FTP server.

Diagram 2 of config push to Unimus
(1) Unimus prompts device to push backup to FTP and (2) device pushes backup to FTP. (3) Server scheduler executes the script (4) Script grabs backups from FTP and (5) pushes them to Unimus.

Although this diagram looks more complicated, again we only need to schedule two jobs. The first one runs on Unimus: a Mass Config Push preset following a daily schedule:

Unimus job preset GUI
Unimus Mass Config Push preset for 'config backup to FTP' everyday at 3:30am
tclsh
regexp {\S+ +\S+ +(\w+) +(\d+) +(\d+)} [exec show clock] match month day year
exec copy run ftp://my.ftp.server/router.ip.address/config-$day-$month-$year
Command set from the push job preset

In both scenarios the second job is the execution of the script on the host running the FTP server, which pushes the backups to Unimus. We will schedule it in Cron or Task Scheduler, depending on the environment.

Pushing the backups from the FTP server to Unimus

Once config backups are copied to the FTP server, be it in binary or text form, our script can push them to Unimus. For convenience, backups will be stored on the FTP server following this specific structure:

Example directory structure
/ftp_rootdir/<device_ip_address>/config_backup

Each device has a separate directory, named after its management IP address, where all configuration backup versions are stored:

Example config backup files
e.g: /home/ftp_data/10.31.8.3/Backup_10.31.8.3_2023-07-14_10-04.txt

On the host with the FTP server we will be running a script (Linux shell or PowerShell) that uploads the config backups to Unimus. Here are the steps it takes:

  1. Comb through each subdirectory under the defined root directory, extracting the directory name (an IP address)
  2. Look up the device in Unimus; (optional) work in a specified Zone; (also optional) create a new device if none is found
  3. Encode each backup file found in a subdirectory to a base64 string digestible by Unimus
  4. Create device backups in Unimus via the API, from oldest to newest, deleting each file from the FTP directory
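The first of these steps hinges on a simple convention: the sub-directory name doubles as the device address. A minimal demonstration of that extraction (using throwaway /tmp paths, not your real FTP root):

```shell
#!/bin/bash
# Recreate the example layout, then walk it the way the script does.
mkdir -p /tmp/ftp_data/10.31.8.3
for subdir in /tmp/ftp_data/*/; do
    address=$(basename "$subdir")   # directory name is the device address
    echo "$address"                 # prints: 10.31.8.3
done
```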

Shell script

The full, ready-to-use script can be found on our GitHub. If you are using the PowerShell script, just skip to the PowerShell section; we will focus on the Bash script here.

You can just download and use it. If you are curious about its inner workings, feel free to read on!

Since we work mainly with APIs, we will be using the curl and jq packages. Install them as necessary.

To start, let's define the mandatory variables for a convenient parametric approach. This is the only part of the script we need to adjust for our environment. Apart from the Unimus hostname/IP address and the generated API token (used in the HTTP headers), the script only needs to know the FTP root directory where the configs are saved.

# Mandatory parameters
UNIMUS_ADDRESS="<http(s)://unimus.server.address(:port)>"
TOKEN="<api token>"
# FTP root directory
FTP_FOLDER="/home/user/ftp_data/"
Mandatory variables

Next, there are some optional parameters, enabled by uncommenting the variables. From Unimus version 2.4.0-Beta3 we can specify a Zone for the script to work in. We can enable curl's insecure option if a self-signed certificate is used. We also have the option to create new devices in Unimus if specific ones have not yet been added.

# Optional parameters
# Specifies the Zone where devices will be searched for by address/hostname
# CASE SENSITIVE; leave commented to use the Default (0) zone
#ZONE="0"
# Insecure mode
# If you are using self-signed certificates you might want to set this to true
SELF_SIGNED_CERT=false
# Variable for enabling creation of new devices in Unimus; set to true to enable
CREATE_DEVICES=false
# Specify description of new devices created in Unimus by the script
CREATED_DESC="The Unbackupable"
Optional variables

Our backup solution is based on API calls to Unimus. Create new device and Get device by address are APIv2 endpoints. Both functions, createNewDevice and getDeviceId, require the device IP address as input ($1). getDeviceId extracts the device ID from the response using | jq .data.id. When working with a specific Zone, createNewDevice sends an additional key-value pair in the body and getDeviceId appends the zoneId parameter to the URI. Check out the full API documentation on our wiki.

function createNewDevice() {
    if [ -z "$ZONE" ]; then
        curl $insecure -X POST -sSL -H "$HEADERS_ACCEPT" -H "$HEADERS_CONTENT_TYPE" -H "$HEADERS_AUTHORIZATION" -d '{"address": "'"$1"'","description":"'"$CREATED_DESC"'"}'\
        "$UNIMUS_ADDRESS/api/v2/devices" > /dev/null
    else
        curl $insecure -X POST -sSL -H "$HEADERS_ACCEPT" -H "$HEADERS_CONTENT_TYPE" -H "$HEADERS_AUTHORIZATION" -d '{"address": "'"$1"'","description":"'"$CREATED_DESC"'", "zoneId": "'"$ZONE"'"}'\
        "$UNIMUS_ADDRESS/api/v2/devices" > /dev/null
    fi
}
Shell function for 'Create new device' API
function getDeviceId() {
    if [ -z "$ZONE" ]; then
        echo "$(curl $insecure -X GET -sSL -H "$HEADERS_ACCEPT" -H "$HEADERS_AUTHORIZATION" "$UNIMUS_ADDRESS/api/v2/devices/findByAddress/$1" | jq .data.id)"
    else
        echo "$(curl $insecure -X GET -sSL -H "$HEADERS_ACCEPT" -H "$HEADERS_AUTHORIZATION" "$UNIMUS_ADDRESS/api/v2/devices/findByAddress/$1?zoneId=$ZONE" | jq .data.id)"
    fi
}
Shell function for 'Get device by address' API

The Create new backup API call is done by the createBackup function. It needs the device ID ($1) and a JSON file ($2) containing the base64 string of the backup file and a TEXT/BINARY type string as input arguments.

function createBackup() {
    curl $insecure -X POST -sSL -H "$HEADERS_ACCEPT" -H "$HEADERS_CONTENT_TYPE" -H "$HEADERS_AUTHORIZATION" -d "@$2" "$UNIMUS_ADDRESS/api/v2/devices/$1/backups" > /dev/null
}
Shell function for Create new backup API
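The JSON body that createBackup receives as its second argument can be built and inspected locally, with no Unimus involved. A small sketch using a made-up demo file:

```shell
#!/bin/bash
# Build the 'Create new backup' request body by hand:
# 'backup' holds the base64-encoded file content, 'type' is TEXT or BINARY.
printf 'hostname router1\n' > /tmp/demo.cfg
encoded=$(base64 -w 0 /tmp/demo.cfg)
printf '{"backup":"%s","type":"TEXT"}\n' "$encoded" | tee /tmp/payload.json
```

A file like /tmp/payload.json is exactly what processFiles later feeds to createBackup as $2.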

The functions above will be used in the main function - processFiles.

The processFiles function does the following:

  • finds sub-directories inside provided ftp folder
  • retrieves device ID, creates new device in Unimus if none is found
  • sorts files inside each sub-directory by modification date from oldest to newest
  • converts each file to base64 string
  • checks if file is binary or text, creates binary/text backup accordingly and deletes the file
function processFiles() {
    # Set script directory for the script
    script_dir=$(cd "$(dirname "${BASH_SOURCE[0]}")" &>/dev/null && pwd -P)
    cd "$script_dir"

    # Creating a log file
    log="$script_dir/unbackupables.log"
    printf 'Log File - ' >> $log
    date +"%F %H:%M:%S" >> $log

    # Insecure curl switch
    if $SELF_SIGNED_CERT; then
        insecure="-k"
    fi

    # Perform Unimus health check
    status=$(healthCheck)
    errorCheck "$?" 'Status check failed'

    if [ $status == 'OK' ]; then
        [ -n "$ZONE" ] && zoneCheck
        echoGreen 'Checks OK. Script starting...'
        ftp_directory="$1"
        # Begin sweeping through the specified FTP directory
        for subdir in "$ftp_directory"/*; do
            if [ -d "$subdir" ]; then
                # Interpret directory names as device addresses/host names in Unimus
                address=$(basename "$subdir")
                # Check if device already exists in Unimus
                id=$(getDeviceId "$address")
                if [ "$id" = "null" ]; then
                    if $CREATE_DEVICES; then
                        createNewDevice "$address" && id=$(getDeviceId "$address") && echoGreen "New device added. Address: $address, id: $id"
                    fi
                fi
                if [ "$id" = "null" ] || [ -z "$id" ]; then
                    echoYellow "Device $address not found on Unimus. Consider enabling creating devices. Continuing with next device."
                else
                    for file in $(ls -tr "$subdir"); do # -tr: oldest first
                        if [ -f "$subdir/$file" ]; then
                            isTextFile=$(file -b "$subdir/$file")
                            if [[ $isTextFile == *"text"* ]]; then
                                bkp_type="TEXT"
                            else
                                bkp_type="BINARY"
                            fi
                            encoded_backup=$(base64 -w 0 "$subdir/$file")
                            temp_json_file=$(mktemp)

                            cat <<-EOF > "$temp_json_file"
                            {
                            "backup": "$encoded_backup",
                            "type": "$bkp_type"
                            }
EOF
                            # Use jq to process the JSON from the temporary file
                            jq '.' "$temp_json_file" > output.json
                            createBackup "$id" "output.json" && echoGreen "Pushed $bkp_type backup for device $address from file $file"
                            # Clean up the temporary files & backup file
                            rm "$temp_json_file" output.json "$subdir/$file"
                        fi
                    done
                fi
            fi
        done
    else
        if [ -z "$status" ]; then
            echoRed 'Unable to connect to unimus server'
            exit 2
        else
            echoRed "Unimus server status: $status"
        fi
    fi
    echoGreen 'Script finished'
}
Main shell function
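The TEXT/BINARY decision above leans on file -b. A standalone sanity check of that logic, using throwaway sample files in /tmp:

```shell
#!/bin/bash
# 'file -b' describes the content; anything matching "text" is pushed as TEXT.
printf 'interface Vlan1\n' > /tmp/text.cfg
printf '\x00\x01\x02'      > /tmp/bin.dat
for f in /tmp/text.cfg /tmp/bin.dat; do
    if [[ $(file -b "$f") == *"text"* ]]; then
        echo "$f -> TEXT"
    else
        echo "$f -> BINARY"
    fi
done
```

The same classification decides the 'type' field sent to the Create new backup API.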

The function is called with the FTP root folder defined at the beginning:

processFiles $FTP_FOLDER
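The oldest-to-newest ordering matters: backups receive their timestamp when the API call creates them, so pushing oldest first keeps the version history chronological. The modification-time sort can be checked in isolation ('ls -tr' reverses the default newest-first listing):

```shell
#!/bin/bash
# Two fake backups with different modification times, listed oldest first.
mkdir -p /tmp/mtime-demo && cd /tmp/mtime-demo
touch -d '2023-07-01' backup_old.txt
touch -d '2023-07-14' backup_new.txt
ls -tr    # backup_old.txt first, backup_new.txt second
```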

As we mentioned earlier, you can get the script in its wholesomeness on GitHub.


Powershell script

The corresponding code walkthrough for Windows environments using PowerShell can be found in this section. Grab the complete script from GitHub, or continue reading if you're interested in how it works under the hood!

The only part we need to update for a specific setup is the mandatory variables. Required are the Unimus hostname/IP address, an API token generated in Unimus and the location of the FTP root directory.

# Mandatory parameters
$UNIMUS_ADDRESS = "<http(s)://unimus.server.address(:port)>"
$TOKEN = "<api token>"
# FTP root directory
$FTP_FOLDER = "/ftp_data"
Mandatory variables

Optionally, we can specify the Zone we want to work in (this works from 2.4.0-Beta3 onwards). Next, we can skip the certificate check for when a self-signed certificate is used on Unimus; the script handles this differently based on the version of PowerShell present on the system. Finally, the variable $CREATE_DEVICES controls adding devices to Unimus if they're not already present.

# Optional parameters
# Specifies the Zone where devices will be searched for by address/hostname
# CASE SENSITIVE; leave commented to use the Default (0) zone
#$ZONE="0"
# Insecure mode
# If you are using self-signed certificates you might want to set this to true
$INSECURE = $false
# Variable for enabling creation of new devices in Unimus; set to true to enable
$CREATE_DEVICES = $false
# Specify description of new devices created in Unimus by the script
$CREATED_DESC = "Unbackupable"
Optional variables

The Create new device and Get device by address API endpoints are accessed using the functions Create-NewDevice and Get-DeviceId. The device IP address ($address) is used as a parameter for both. Get-DeviceId returns the .data.id value (device ID) from the API response and also handles response exceptions by returning null when a device doesn't exist in Unimus. When working in a non-default Zone, Create-NewDevice adds a zoneId key-value pair to its data payload and Get-DeviceId appends zoneId to the URI. The full API documentation can be found on our wiki.

function Create-NewDevice {
    param(
        [string]$address
    )

    $body = @{
        address = $address
        description = $CREATED_DESC
    }

    if ($ZONE) {
        $body["zoneId"] = $ZONE
    }

    $body = $body | ConvertTo-Json

    $headers = @{
        "Accept" = "application/json"
        "Content-Type" = "application/json"
        "Authorization" = "Bearer $TOKEN"
    }

    if ($INSECURE -and $psMajorVersion -ge 6) {
        Invoke-RestMethod -SkipCertificateCheck -Uri "$UNIMUS_ADDRESS/api/v2/devices" -Method POST -Headers $headers -Body $body | Out-Null
    } else {
        Invoke-RestMethod -Uri "$UNIMUS_ADDRESS/api/v2/devices" -Method POST -Headers $headers -Body $body | Out-Null
    }
}
Powershell function for Create new device API
function Get-DeviceId {
    param(
        [string]$address
    )

    $headers = @{
        "Accept" = "application/json"
        "Authorization" = "Bearer $TOKEN"
    }

    if ($ZONE) {
        $uri="api/v2/devices/findByAddress/" + $address + "?zoneId=" + $ZONE
    } else {
        $uri="api/v2/devices/findByAddress/" + $address
    }

    try {
        if ($INSECURE -and $psMajorVersion -ge 6) {
            $response = Invoke-RestMethod -SkipCertificateCheck -Uri "$UNIMUS_ADDRESS/$uri" -Method GET -Headers $headers
        } else {
            $response = Invoke-RestMethod -Uri "$UNIMUS_ADDRESS/$uri" -Method GET -Headers $headers
        }
        return $response.data.id
    }
    catch {
        if ($_.Exception.Response.StatusCode -eq 404) {
            return "null"
        }
    }
}
Powershell function for Get device by address API

The Create new backup API is called through the Create-Backup function. It uses the device ID ($id), the base64 string of the backup file ($encodedBackup) and TEXT/BINARY ($type) as input.

function Create-Backup {
    param(
        [string]$id,
        [string]$encodedBackup,
        [string]$type
    )

    $body = @{
        backup = $encodedBackup
        type = $type
    } | ConvertTo-Json

    $headers = @{
        "Accept" = "application/json"
        "Content-Type" = "application/json"
        "Authorization" = "Bearer $TOKEN"
    }
    if ($INSECURE -and $psMajorVersion -ge 6) {
        Invoke-RestMethod -SkipCertificateCheck -Uri "$UNIMUS_ADDRESS/api/v2/devices/$id/backups" -Method POST -Headers $headers -Body $body | Out-Null
    } else {
        Invoke-RestMethod -Uri "$UNIMUS_ADDRESS/api/v2/devices/$id/backups" -Method POST -Headers $headers -Body $body | Out-Null
    }
}
Powershell function for Create new backup API

The main function Process-Files executes the following logic:

  • finds subdirectories inside provided ftp folder
  • retrieves device ID, creates new device in Unimus if none is found
  • sorts files inside each subdirectory by modification date from oldest to newest
  • converts each file to base64 string
  • checks if file is binary or text, creates binary/text backup accordingly and deletes the file
function Process-Files {
    param(
        [string]$directory
    )

    $log = Join-Path $PSScriptRoot "unbackupablesPS.log"
    $logMessage = "Log File - " + (Get-Date -Format "yyyy-MM-dd HH:mm:ss")
    Add-Content -Path $log -Value $logMessage
    #Health check
    $status = Health-Check

    if ($status -eq 'OK') {
        if ($ZONE) {
            Zone-Check
        }

        Print-Green "Checks OK. Script starting..."
        $ftpSubdirs = Get-ChildItem -Path $directory -Directory

        foreach ($subdir in $ftpSubdirs) {
            $address = $subdir.Name
            # Check if device already exists in Unimus
            $id = "null"; $id = Get-DeviceId $address

            if ($id -eq "null" -and $CREATE_DEVICES) {
                Create-NewDevice $address
                $id = Get-DeviceId $address
                Print-Green ("New device added. Address: " + $address + ", id: " + $id)
            }

            if ($id -eq "null" -or $id -eq $null) {
                Print-Yellow ("Device " + $address + " not found on Unimus. Consider enabling creating devices. Continuing with next device.")
            } else {
                $files = Get-ChildItem -Path $subdir.FullName | Sort-Object -Property LastWriteTime # oldest first

                foreach ($file in $files) {
                    if ($file.GetType() -eq [System.IO.FileInfo]) {
                        $content = [System.IO.File]::ReadAllBytes($file.Fullname)
                        $encodedBackup = [System.Convert]::ToBase64String($content)

                        if ($content -contains 0) {
                            $bkp_type = "BINARY"
                        } else {
                            $bkp_type = "TEXT"
                        }
                        Create-Backup $id $encodedBackup $bkp_type
                        Print-Green ("Pushed " + $bkp_type + " backup for device " + $address + " from file " + $($file.Name))
                        Remove-Item $file.FullName
                    }
                }
            }
        }
    } else {
        Print-Red "Unimus server status: $status"
    }
    Print-Green "Script finished."
}
Main PowerShell function

To run, call the main function via:

Process-Files -directory $FTPFOLDER

That's all she wrote! Here's a GitHub link with the full version of the script.

Scheduling with Cron

This section takes care of the automation part of the whole config push process. On Linux, we can run our script at a predefined time using Cron. Add a user-specific job via crontab -e and insert the following line:

0 4 * * * /path/to/your_Unbackupables_script.sh
Crontab entry to run the script at 4:00am every day
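If you also want a record of each run, the same crontab entry can append the script's output to a log file (the log path below is just an example):

```
0 4 * * * /path/to/your_Unbackupables_script.sh >> /var/log/unbackupables.log 2>&1
```

Crontab entry that also captures stdout and stderr of each run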

Task Scheduler

All you gamer folk working in a Windows Server environment can use Task Scheduler to run the PowerShell version of the script. These are the steps needed:

1) Under Actions hit 'Create task' and provide a name and description.

Task scheduler menu for creating a task

2) Switch to Triggers tab, create a 'New' trigger and specify the start time and frequency of your choice.

Task scheduler menu for adding new trigger

3) In the Actions tab hit 'New'. Select 'Start a program' as the action, browse for the PowerShell executable, and add an argument by supplying the path to the PowerShell script.

Task scheduler menu for creating an action

After confirming, we can see our new task 'push backups to Unimus' in the Task Scheduler Library. It is now ready to run automatically according to our specified schedule.

Task scheduler menu for running tasks

Backup push aftermath in Unimus

After the script runs we should see new backups created in Unimus:

Unimus GUI displaying text backups
API created TEXT backups
Unimus GUI displaying binary backups
API created BINARY backups

Note that new device backups may show almost identical timestamps. This can happen when the script makes multiple API calls for a collection of config files that accumulated inside a single device folder on the FTP server. A likely scenario is the initial script execution, when we start with a considerable backup history that we want pushed to Unimus from FTP; another is when the script is not executed periodically and configs on the FTP server stack up. If your backup push script runs with (slightly after) your 'copy backup to FTP' schedule, this should not happen. Even if it does, the backups in Unimus are created with their chronology preserved thanks to the script's internal logic.

T-shooting

Things sometimes work out on the first try. More often though, they don't. If your case is "they don't", keep on reading.

There are a few possible sources of headache - the FTP directory structure, the Unimus API, working Zone selection, and the use of a self-signed certificate are the likely candidates. Let's have a closer look at each.

Directory structure nerve-wrackers

Successful implementation of the backup handling in our scripts relies heavily on the directory structure of your FTP server - or on how robust the script is (we did our best). In our setup, each managed device has its own folder named after the device's IP address or hostname. Each folder in turn contains the backup file versions for a given device. Make sure your setup keeps this structure.

API mishaps

There shouldn't be any, but if issues with API calls do arise, you can check the following.

Every API request includes an Authorization header that supplies the API token to authorize the request, following this scheme:

Authorization: Bearer <token>
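As a quick sanity check outside the script, you can build such a request yourself. Below is a minimal Python sketch that constructs an authenticated request; the server URL and token are placeholders, and we assume the standard APIv2 /api/v2/health path that a health check would target:

```python
import urllib.request

# Placeholders - substitute your own Unimus address and API token
UNIMUS_URL = "https://unimus.example.com"
TOKEN = "your-api-token"

# Build a request carrying the token in the Bearer scheme
req = urllib.request.Request(
    UNIMUS_URL + "/api/v2/health",
    headers={"Authorization": "Bearer " + TOKEN},
)

# urllib.request.urlopen(req) would perform the actual call;
# here we only confirm the header is attached as expected
print(req.get_header("Authorization"))  # -> Bearer your-api-token
```

If the header is malformed or the token is truncated, Unimus will reject the request, which is exactly the class of errors discussed below.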

To create a token, log in to your Unimus instance and navigate to User management > API tokens. Make sure you copy the whole string into the mandatory 'TOKEN' variable. Otherwise you might get the following error:

ERROR: Unimus server status: null

The script is designed to push backups to a specific Zone in Unimus, or the default one if no Zone is specified. If the specified Zone does not exist, you will get an error:

ERROR: Error. Zone <ZoneID> not found!

Just make sure to specify an existing Zone ID. Zone IDs are case sensitive.

Self-signed cert predicaments

If you encounter errors with the script, it pays to double-check the optional parameters. A self-signed certificate on Unimus might give you SSL-related errors if it was issued by a certificate authority unknown to (or untrusted by) the system you're executing the script from. Enabling 'certification check skip' in the script is an option to consider.

Final words

We hope you will find this tutorial helpful for integrating configuration backups into Unimus from devices that do not support backups from the CLI. We appreciate any feedback, questions or possible improvements, and we have a thread on our Forum for just that purpose.

]]>
<![CDATA[ Using Active Directory and LDAP for AAA in Unimus ]]> https://blog.unimus.net/using-active-directory-and-ldap-for-aaa-in-unimus/ 640257128618b40001ed59e1 Mon, 03 Apr 2023 19:26:31 +0000 In this guide, we would like to show you how to pair your Active Directory with Unimus and use LDAP for AAA.

We will be focusing on AD running on Windows Server, and we will assume you have your server already set up with Active Directory.

Preparing for LDAP - Organizational structure in Active Directory

As LDAP is integrated and available in Active Directory by default, let's start by briefly showcasing the simple, exemplary organizational structure we created in Active Directory Users and Computers:

We created a simple structure of a couple of child Organizational Units (OUs) under the unimus.local domain, more specifically unimus.local > Unimus > Unimus Admins.

Let's take another look at them in the ADSI Edit utility, where we can have a better view of their Distinguished Names (DNs):

ADSI Edit gives us a more useful (for us in this context) view of our organizational structure. We can see how each object chains and contributes to the DN of our target OU Unimus Admins.

OU=Unimus Admins,OU=Unimus,DC=unimus,DC=local

This DN will serve as the Base DN in Unimus for the lookup of objects (users) based on a User Identifier attribute we will choose. As per Unimus' LDAP documentation (more on this later), we need to specify a Base DN which will serve as the root point from which Unimus searches for users.

The last thing to look at is one of our users. Let's pick Jane Doe as an example and see how this user's DN looks:

CN=Jane Doe,OU=Unimus Admins,OU=Unimus,DC=unimus,DC=local

This is Jane Doe's DN; however, we don't have to use this full DN to identify Jane Doe. We can choose any of the available attributes of this account to use - assuming these attributes exist and do not have blank values. By right-clicking Jane Doe's record and choosing Properties, we can see the Attribute Editor, where we can examine and edit existing attributes or add new values to other unused attributes:

In this view, we can sort values so that we see all non-blank attributes together. In typical Windows AD deployments, you usually use the sAMAccountName attribute to look up AD users instead of the DN. This is typically the username for Windows AD logins, so we will use this for the configuration in Unimus as well.

LDAP integration in Unimus

Before we jump into the configuration section, let's start with some details on how LDAP auth works in Unimus. The process of authenticating a local user in Unimus against an external LDAP is done in two stages.

1) In the first stage, the Unimus LDAP client logs into the LDAP directory using the provided server details and access user credentials (DN and password). If the login is successful, Unimus searches for the provided user (the user attempting to log into Unimus) in the target directory tree defined by the Base DN. If the user is matched using the configurable attribute (the User Identifier), Unimus proceeds to the second stage.

2) In the second stage, Unimus uses the previously matched username to retrieve the full user DN and attempts to authenticate as that user with the provided password. If the authentication succeeds, the user is logged into Unimus.
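To make the two stages concrete, here is a small Python sketch of this kind of search-then-bind flow, with the directory search and bind abstracted behind plain functions. All names here are illustrative, not Unimus internals:

```python
def two_stage_ldap_auth(search, bind, base_dn, user_attr, username, password):
    """Sketch of a two-stage LDAP auth flow similar to the one described above.

    `search(base_dn, ldap_filter)` should return a list of matching user DNs,
    `bind(dn, password)` should return True if the directory accepts the bind.
    """
    # Stage 1: find the user's full DN under the Base DN via the User Identifier
    matches = search(base_dn, "(%s=%s)" % (user_attr, username))
    if len(matches) != 1:
        # mirrors the "Incorrect result size: expected 1, actual N" error
        raise LookupError("expected 1 match, got %d" % len(matches))

    # Stage 2: authenticate by binding as the matched DN with the user's password
    return bind(matches[0], password)


# A tiny fake directory to demonstrate the flow
directory = {"CN=Jane Doe,OU=Unimus Admins,OU=Unimus,DC=unimus,DC=local": "s3cret"}

def fake_search(base_dn, ldap_filter):
    # pretend the filter (sAMAccountName=jdoe) matched Jane Doe's DN
    return [dn for dn in directory if dn.endswith(base_dn)] if "jdoe" in ldap_filter else []

def fake_bind(dn, password):
    return directory.get(dn) == password

ok = two_stage_ldap_auth(
    fake_search, fake_bind,
    base_dn="OU=Unimus Admins,OU=Unimus,DC=unimus,DC=local",
    user_attr="sAMAccountName", username="jdoe", password="s3cret",
)
print(ok)  # -> True
```

Note how a failed stage 1 (no match under the Base DN) surfaces as a "result size" error, while a failed stage 2 surfaces as a credentials error - the same split you will see in the troubleshooting section below.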

For more details about authentication, security options, and OpenLDAP / AD configuration examples, we recommend taking a minute to check out our Wiki article, which goes over all of the above and more: https://wiki.unimus.net/display/UNPUB/LDAP+Auth

The LDAP Access User

As mentioned above, we need an Access User which will be used to find the exact account trying to log in to Unimus. The Access User can be any LDAP account that has access (read-only access is sufficient) to the LDAP tree under which your login users exist. More info is in the Wiki article mentioned above.

The Access User of course also needs to be able to "see" the user account objects as well, not just the OUs in your directory.

Unimus configuration - Configuring LDAP

You can configure LDAP in User management > LDAP configuration:

Where:

LDAP server address - the address of your domain controller.

LDAP port - the default port is 389.

LDAP access user DN - full DN of the user able to access AD records.

LDAP access password - a password for the access user.

Security - an optional security measure for LDAP transmission. Both LDAPS and StartTLS require respective server-side (the LDAP server) configurations.

Note: if you opt to use LDAPS or StartTLS with certificate validation, you can follow our guide on importing CA certificates in Unimus; both require importing your CA into the system's Java KeyStore. You don't need to do this if you enable 'Do not check certificate' to skip LDAP cert validation.

LDAP base DN - the DN of the root of the tree storing user records, which Unimus will be authenticating login attempts against.

User identifier - an identifier that Unimus uses to match users in LDAP to retrieve user's full DN for auth purposes. You can use any attribute that suits your needs. For example, you can use CN, sAMAccountName, uid, etc.

LDAP filter - an optional LDAP filter to further filter matches in the given base DN. Standard LDAP filter syntax is supported.
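For instance, a filter along these lines (purely illustrative - the group DN is hypothetical and must match your environment) would only match user objects that belong to a specific AD group:

```
(&(objectClass=user)(memberOf=CN=Unimus Users,OU=Unimus,DC=unimus,DC=local))
```

Combined with the Base DN and User Identifier, this lets you narrow down exactly which directory accounts are eligible to log into Unimus.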

Unimus configuration - Adding LDAP accounts to Unimus

Unimus currently requires every external user account to have a matching account in Unimus. You can create a user in User management > Users:

Where:

Username - the matching username for the external account in Active Directory.

Authentication method - LDAP.

Select access role - select which role the user will represent.

At this point, users should be able to log into Unimus using LDAP and their external account credentials.

Troubleshooting

There are generally two troubleshooting sources for any issue with LDAP AAA: the Unimus log and AD debug logging on Windows Server. We recommend focusing on the Unimus log, as it features full LDAP error codes without any additional configuration, which is not the case with AD debug logging (which requires manually specifying logging levels for all AD components).

Let's take a closer look at a couple of example error messages you could encounter and their likely causes:

Ldap authentication failed: '10.20.30.40:389; nested exception is javax.naming.CommunicationException: 10.20.30.40:389 [Root exception is java.net.ConnectException: Connection refused (Connection refused)]'

This error indicates a wrong configuration (wrongly specified IP and / or port) or a network issue (likely a firewall).

Ldap authentication failed: '10.20.30.40:636; nested exception is javax.naming.CommunicationException: 10.20.30.40:636 [Root exception is javax.net.ssl.SSLHandshakeException: Remote host terminated the handshake]'

This error indicates an issue with the LDAPS or StartTLS configuration, e.g. Unimus has LDAPS or StartTLS checked but the server doesn't use, or is not set up for, either; or the system running Unimus is missing the SSL certificate for the LDAP server. In the latter case, refer to the section above where we mention what is required for a certificate to be recognized.

Ldap authentication failed: '[LDAP: error code 49 - 80090308: LdapErr: DSID-0C09041C, comment: AcceptSecurityContext error, data 52e, v4563]

This error indicates incorrect credentials - any combination of wrong credentials (DN or password) for the access user, or a wrong password provided for the Unimus user being authenticated.

Ldap authentication failed: 'Incorrect result size: expected 1, actual 0'

This error indicates the access to LDAP was successful, but given the base DN and input in the username field, no match for the given user was found. Possible causes include:

  • Wrong username input and / or the user was not found when searching under the Base DN. Check the username and / or the Base DN.
  • The User Identifier is incorrectly configured. Check if the identifier is correct for your LDAP objects.
  • The provided base DN does not contain a user identified by the configured User Identifier (similar to the 2 points above). Check if the user you want to auth has the identifier you are using.
  • An LDAP filter specifying conditions the user does not comply with. Check if your filter properly returns the user you want to auth.

Final words

Hopefully this article can guide you through connecting Unimus with Active Directory via LDAP. If you have any questions, or you run into any issues, please feel free to post in the Support section of our forums, or contact us through our usual support channels.

]]>
<![CDATA[ Unimus & NetXMS - how to monitor and trigger Unimus jobs in NetXMS ]]> https://blog.unimus.net/unimus-netxms-how-to-monitor-and-trigger-unimus-jobs-in-netxms/ 6425c8cf14e70400018249f6 Thu, 30 Mar 2023 20:07:37 +0000 In this guide we will look at how to monitor Unimus device job results in NetXMS, and how to create NetXMS alarms if a device doesn't have a valid backup within a specified timeframe. We will also explore how to run Unimus jobs directly from NetXMS.

Unimus & NetXMS integration

Running Unimus together with NetXMS creates a powerful ecosystem for both monitoring (NMS) and config management (NCM) of your infrastructure. You can integrate both solutions together - use data from NetXMS in Unimus, and vice-versa.

Unimus has a built-in NetXMS importer which allows Unimus to adopt nodes from NetXMS. This allows you to automate device ingestion in your network - both when you are first deploying the systems, as well as when you add new devices to your network. There is no need to add a device to Unimus and NetXMS separately - just add your device to NetXMS, and Unimus will automatically adopt it. You can read more about how to configure this on our Wiki.

In this article however, we want to look at how to integrate data from Unimus into NetXMS - how to see Unimus job statuses and device data in NetXMS, and how to trigger Unimus jobs directly from NetXMS.

Preparations

There are a few steps to do before we get started:

1) Enable Web Service Proxy in the NetXMS Agent of the node polling your network.

If you are not using Zones in NetXMS, this will be your NetXMS server. If you are using Zones, you will need to enable this on the Zone's proxy.

To enable the Web Services Proxy, we edit the Agent's config:

And add the appropriate setting EnableWebServiceProxy = yes in the config:

You need to restart the NetXMS Agent for this to take effect.

2) Create scripts required for Web Service Definitions, DCIs and Object Tools.

After the Web Service Proxy is enabled, we need to create a few scripts that other components in NetXMS will use. These scripts provide your Unimus server address and the Unimus API token to the DCIs and Object Tools which will later use them.

We have prepared an export of these scripts for you, available in the NetXMS Configuration Exports GitHub repo. You can simply download the export and import it in your NetXMS using Tools > Import Configuration...

We will import 4 scripts:

You will need to open the Unimus::getServerAddress script, and change the returned value to the URL of your Unimus Server.

You will also need to change the API token in the Unimus::getApiToken script. You can generate an API token in Unimus under User management > API tokens. Click the clipboard button to copy the token, and paste it as the return string in the NetXMS script.
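Assuming the scripts follow the usual pattern for such helper scripts, each is a one-line NXSL script that simply returns its value; after editing, they would look something like this (both return values below are placeholders for your own server address and token):

```
// Unimus::getServerAddress
return "https://unimus.example.com:8085";

// Unimus::getApiToken
return "your-api-token";
```

Keeping the address and token in these two scripts means every Web Service Definition, DCI and Object Tool picks them up from a single place.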

3) Create Web Service Definitions.

Web Service Definitions are also available in the NetXMS Configuration Exports GitHub repo. After you import them, you should be able to see them in Configuration > Web Service Definitions:

Monitoring Unimus jobs in NetXMS

After the preparations above are finished, we can now import templates that use the Web Service Definitions to pull data from Unimus into NetXMS.

The NetXMS Configuration Exports GitHub repo contains a template export, which when imported creates 2 templates:

We have not included any Auto-Apply Rules in these templates, as each NetXMS deployment is usually quite unique in how nodes are structured. You can apply your own Auto-Apply Rules if you wish. For this article, we will just bind nodes to the templates manually.

You should bind your networking devices to the Unimus device data template, and the Unimus Server node to the Unimus server status template. For the devices, you should see multiple DCIs created:

There is a threshold on the Unimus device last backup time DCI, which creates an alarm if a backup was not successful for the device in the last 3 days. You can change this threshold in the DCI config if you wish:

Triggering Unimus jobs from NetXMS

The next step is to create Object Tools which allow you to trigger Unimus jobs directly from NetXMS. The NetXMS Configuration Exports GitHub repo contains the Object Tools, which should look like this when imported:

You should see the tools on your nodes under Commands:

If you wish, you can create Filters to make sure these tools are only available on Unimus-managed nodes. We have not included any for the same reason as the Auto-Apply Rules on the templates.

Final words

Hopefully this article can serve as an example on how to integrate Unimus and NetXMS together. If you have any questions, or you run into any issues, please feel free to post in the Support section of our forums, or contact us through our usual support channels.

]]>
<![CDATA[ Release Overview - Unimus 2.3.0 ]]> https://blog.unimus.net/release-overview-unimus-2-3-0/ 63e1b7e98618b40001ed588c Wed, 15 Feb 2023 18:54:52 +0000 2.3.0 is the latest major Unimus release. With 120+ lines in the Changelog, this article hopes to provide a short overview of the major features and other new additions in this release.

The full Changelog is also present at the bottom of this article - if you would like to see everything that this release contains.


LDAP authentication support

The most anticipated feature in this release is support for native LDAP authentication. LDAP has been requested by many users from the community, and we are happy to report it's now here!

The LDAP connector was designed to be fully configurable and to support both OpenLDAP and Microsoft Active Directory. Examples on how to configure both are available on our Wiki. Please check the full documentation on our Wiki for more info.


MS SQL database support

Another often-requested feature implemented in this release is support for using a Microsoft SQL Server database. In the Deploy Wizard, you can now select MSSQL as your database. After you finish the Wizard, everything should work as expected.

Support for MSSQL brings the total database support in Unimus up to 5 different DB engines (HSQL, MySQL, MariaDB, PostgreSQL and MSSQL). We hope this offers enough flexibility to deploy Unimus in just about any environment.


"Offline Mode" (support for air-gapped networks)

Unimus offline mode

Last year, we announced that we would be bringing support for Offline Mode to Unimus. Until today, Unimus required a check with our Licensing Server to function. Starting with 2.3.0, a fully air-gapped deployment of Unimus is possible.

With Offline Mode, Unimus can now be deployed in highly-secured environments where complete outside connectivity blocking is required.

Please note the Offline Mode is only available to customers with the Unlimited License (more info here). If you are interested in using Offline Mode, just contact our Support.


Config Search Export and Send functions

Results of Config Search can now be exported! This is very useful when you need to present a report for a security audit, to management, or use the search results for processing in a different system.

The export format, as well as the contents, are fully configurable. You can export the search results as a nice-looking HTML document with full search information, or export only the search results themselves in YAML for further machine processing.

We hope this feature makes your reporting duties a bit easier :)


Other minor new features

On top of the major features shown above, there are many other minor features, improvements, and UI / UX updates. As with every release, we also added support for many new device types. This time around, drivers for 28 new device types were added.

For the full list of new features (and supported devices), please see the Changelog below.


Bug fixes and security fixes

As with every release, a sizable list of fixes for various bugs and issues is present. Of note are the fixes for many edge cases where jobs (Discovery / Backups / Push) could fail on various older networking devices.

There are also a few security issues fixed in this release. In particular, our MySQL DB driver library was updated due to multiple fixed vulnerabilities reported in its older versions.


Finally, here is the full Changelog for 2.3.0. As this is a major release, the Changelog is quite long. But if you want to see all the changes in this release, please read on:

= Version 2.3.0 =
Features:
  Added device UUIDs in APIv2 (all "/devices" endpoints)
  The "Default" Zone will now be marked as "Default" when renamed
  Added support for recognizing Observium device IDs in Observium NMS Sync
  Improved built-in backup filters for Siklu devices
  Incremental performance improvements across many parts of the system
  Added support for acknowledging login prompts in keyboard-interactive mode during SSH login
  Added retrieval of backup from Fiberhome devices in configure mode if not available in enable mode
  Improved device CLI mode switching and mode detection during discovery
  Added support for prompt format changing when switching contexts on Cisco ASA (multi-context)
  Added support for Configure Mode on Sonicwall NSA
  Added handling which improves backup formatting on Cambium cnMatrix switches (removes double lining)

  Added "Offline Mode" (support for air-gapped networks):
    - Unimus can be now switched to full offline mode, which removes the necessity to contact our Licensing Server
    - Offline Mode licenses are only available to users with an Unlimited License subscription
    - please contact us to request an Offline Mode license

  Added support for LDAP authentication:
    - LDAP can now be used as an external authentication provider
    - full support for configuring custom user search DN and specifying username LDAP attributes
    - tested on both OpenLDAP as well as Microsoft Active Directory
    - full documentation: https://wiki.unimus.net/display/UNPUB/LDAP+Auth

  Added support for MS SQL:
    - we have added support for Microsoft SQL Server as an officially supported DB engine
    - the Deploy Wizard will allow you to select MSSQL during deployment
    - to migrate to MSSQL, you will need to set up a new Unimus deploy, data migration is currently not supported

  Added Config Search Export and Send functionality:
    - you can now export (download) or directly send Config Search results
    - support for exporting in both HTML and YAML format
    - configurable export formatting (header, search criteria, etc.) or just results

  Added options to specify which SSH cryptography options Unimus supports:
    - in some environments, it may be desired to disable support for weaker SSH crypto
    - full documentation: https://wiki.unimus.net/display/UNPUB/Supported+SSH+cryptography

  Added support for:
    - Accedian AMO series
    - ADVA LX series console servers
    - Arris C4 series chassis
    - BDCOM OLTs
    - Additional Brocade NOS device models
    - CheckPoint Gaia devices
    - CheckPoint Security Gateway
    - CheckPoint Security Management Server
    - CheckPoint SMB Gateway
    - CheckPoint VSX
    - Additional Ciena SAOS device models
    - Dasan OLTs
    - Enterasys switches (A4 / B2 series)
    - Extreme Wing APs in cluster mode / virtual controller mode
    - Extreme WLC
    - Fortinet FortiAuthenticator
    - Metaswitch Perimeta SBCs
    - NetApp switches
    - Nokia OLTs (FX-8)
    - MRV LX series console servers
    - Opengear Infrastructure Manager devices
    - Opengear Resilience Gateway (ACM)
    - Pulse Secure Virtual Traffic Manager
    - Ribbon (ECI) Apollo
    - Securepoint UTM
    - SNR (NAG) Switches
    - YunKe switches
    - Zyxel GS19xx series switches
    - Zyxel ATP

Fixes:
  Fixed backup retention would not work on specific MySQL Server versions
  Fixed Inverted Config Search would not work on specific PostgreSQL versions
  Fixed diff visualization would incorrectly show new empty lines when large delete sections were followed by a new addition
  Fixed first failed job on a newly added device would not set its Last Job Status to failed
  Fixed disabled retention jobs would still show up in "Schedules > Show scheduled jobs" window
  Fixed API v2 get backups by device id and latest backups by device id not working
  Fixed API (of the local instance) denying all requests when connection to Licensing Server was down
  Fixed API v3 Push Jobs search not working on PostgreSQL
  Fixed possible deletion attempt on an already deleted object comment which would result in errors
  Fixed Per-Tag Connector config updates not being propagated between concurrent users (live updates were missing)
  Fixed "Schedules" table updates not being propagated between concurrent users (live updates were missing)
  Fixed "Config Search > Show all lines" does not work if Context lines is set to a negative value
  Fixed moving devices between Zones would not trigger needed rediscovery in specific cases
  Fixed moving devices between Zones would trigger unneeded rediscovery in specific cases
  Fixed incorrect "Currently running Scans" count if a Network Scan preset was deleted while it was running
  Fixed "Devices > Last Job Status" could be incorrect if running a job with all Connectors disabled
  Fixed multiple minor UI / UX issues and UI element state and alignment issues
  Fixed SSH connections failing to PanOS devices when login acknowledgement prompts were enabled
  Fixed backup not working on specific Fiberhome devices
  Fixed backup and Config Push could fail on some Positron GAM devices
  Fixed backup not working on Cisco FXOS devices in cluster mode
  Fixed Cisco SX devices could contain backup command echo as part of the backup
  Fixed Exablaze Fusion devices could contain backup command echo as part of the backup
  Fixed discovery failing on specific Aruba ArubaOS / HP(E) ProCurve devices
  Fixed discovery failing on specific Brocade NOS devices
  Fixed discovery failing on specific Ciena SAOS devices
  Fixed discovery failing on DCN devices with newer firmwares (after rebranding to YunKe)
  Fixed discovery failing on netElastic vBNG
  Fixed discovery failing on Dell OS10 switches if they output a Bell before the prompt
  Fixed discovery failing on Extreme VX devices (VX9000)
  Fixed discovery failing on Opengear devices when using the "root" user
  Fixed discovery failing on newer versions of OPNsense
  Fixed discovery failing on Fiberstore S5850 (and related devices) with newer firmwares
  Fixed discovery failing on specific Nokia / Vecima OLT devices
  Fixed discovery failing on multi-context Cisco ASA with different prompt in different contexts
  Fixed discovery could fail on devices which use pagination in very specific cases
  Fixed discovery not falling back to Telnet after IO errors occurred on the SSH connection

  Fixed SSH connections failing to servers which did not support higher MAC segment size:
     - affected devices usually had very old firmwares with weak SSH MAC support
     - example of affected devices: Dell PowerConnect 55xx, some versions of Cisco SF/SG switches, etc.

Security fixes:
  Updated MySQL Connector due to multiple published vulnerabilities in older versions
  Fixed currently opened "Devices > Tags" window still working if user lost access to the device
  Fixed currently opened "Devices > Comments" window still working if user lost access to the device
  Users which did not have full access to a Config Push preset could still delete the preset in its context menu

Embedded Core version:
  2.3.0

Known issues:
  ISSUE: "Re-discover affected devices when Ports or Connectors change" Advanced Settings option does not work
  WORKAROUND: none
  STATUS: issue scheduled for fixing

  ISSUE: Some screens in Unimus show time in server's time zone, others in client's (browser's) time zone
  WORKAROUND: none, issue only relevant if client has different time zone than server
  STATUS: we are debating on how to fix this - will likely create a setting to select which TZ should be used
]]>
<![CDATA[ Update on Unimus security - 2022 edition ]]> https://blog.unimus.net/update-on-unimus-security-2022-edition/ 62335abde683410001ceabcb Tue, 28 Jun 2022 12:55:44 +0000 Early last year we wrote our first report on the security of Unimus releases and Unimus' code-base (available here). The report was prompted by the "SolarWinds incident" and the questions that followed from our user-base (you) about the state of Unimus security. Since last year, however, many things have happened in the security industry as a whole (log4j, anyone?), and we have also been working hard to improve the security of Unimus itself.

We think now (since we just released the results of Unimus pentests) is the right time to do a "state of Unimus security" update for 2022.

New Security Hub

We have created a new Security Hub.

This is hosted on a completely separate server without any links to our other infrastructure to avoid the possibility of tampering with the data in case our other infrastructure components were compromised.

For now, you can find the hashes of all current production binaries, as well as instructions on how to check the hashes and code signatures of all our binaries. Security-related documents are also hosted there - you can find the results of the mentioned pentests there as well. We will be adding more to the Security Hub in the coming months.

Full offline / airgapped mode for Unimus

We are officially announcing support for full offline mode - support for running Unimus in air-gapped networks.

Even though we have always tried to be transparent about what Unimus sends to our licensing server, and we support proxying the licensing communication, we understand that a hard requirement for outbound connectivity from Unimus to our licensing server can be a security issue in sensitive environments.

Implementing support for full offline mode is a large amount of work, but we want to bring it to Unimus before the end of this year (2022). Offline mode will be available to any customer on the Unlimited License tier once it is ready.

Penetration testing

As we mentioned at the start of the article - we published the results of penetration tests of the Unimus API and the web GUI earlier this week. This was the culmination of our security focus over the last year. In the lead-up to the pentests, we did multiple internal rounds of reviews and improvements to the security of our infrastructure, our build pipeline (CI/CD), and the codebase itself.

You can check out the full pentest results here. As a short summary - we are very happy with the results. A single major issue was identified, which was fixed in the 2.2.3 release; the rest were only lower-severity findings.

Code signing

In last year's report we stated that we wanted to introduce code-signing across the entire Unimus binary ecosystem. I am glad to report that this has been done, and since release 2.1.0 (August 2021), all our release binaries are fully code-signed. The Security Hub shows the commands you can use to validate the signatures.

As a side-note - on Windows you can just right-click the .exe files, and you will find the full signing chain in the "Properties" of the .exes.

Bug bounty / Security bounty program

Another area we pointed to in last year's report was the establishment of an official Security Bounty program. We worked on this over the last year, but sadly, due to circumstances outside of our control, an official bounty program is not yet ready. While we don't have any news on this front for now, we are still committed to finding a way to make this happen, and we will keep you informed as soon as there is any progress.

Our infrastructure, CI/CD systems and the Unimus build pipeline

Other than the directly visible public efforts mentioned above, we have also been putting a lot of time into our internal infrastructure, our CI/CD systems and our software build pipelines.

There has been progress on many fronts in these categories:

  • Technology - work on internal SSO systems to make sure all access controls and account management are in a single place.
  • Monitoring, reviews and audits - periodic reviews of our infrastructure, monitoring for indicators of compromise (IoC), work on SIEM, etc.
  • Process improvements - onboarding and offboarding processes that assure no accounts are left open that should be closed.
  • The build process of Unimus itself - for example, we have implemented policies which assure that vulnerable software components (like log4j) can not be a part of our software.

The above is not a complete list at all, so if you are interested in any particular area from this section, let us know and we will gladly provide more details.

The outro

With the pentests (and all the work preceding them) now behind us, the largest time investments into security that were needed are done. Going forward, however, we will keep paying increased attention to security, and continue to promote security internally as one of the most important aspects of our software.

If you have any questions and / or comments, please post in the topic corresponding to this article on our forums.

]]>
<![CDATA[ Unimus security - penetration test report 2022 ]]> https://blog.unimus.net/unimus-security-penetration-test-report-2022/ 62335ad0e683410001ceabcf Wed, 22 Jun 2022 14:41:28 +0000 Over the last year we have been putting increasing focus and dev time on security in Unimus. This culminated with a full pentest of the Unimus API and the Unimus web GUI a couple of months ago. In this post, we want to share the pentest results with you - as per our full transparency policy.

Unimus API pentest result summary: download here
Unimus web GUI pentest result summary: download here

A single high severity issue has been identified during the pentest, and has already been fixed in the 2.2.3 release, available for download here.

Overview

In summary, we are quite happy with the outcome, and the state of security in Unimus. With only a single high severity issue discovered (which has already been fixed), and a few other lower severity issues present, this is a very positive result.

With Unimus being an on-premise application - where each customer's (your) instance and data are completely separate from others, and you are in complete control of access to your instance - any issues and their potential impact are greatly mitigated as well.

Let's look at the discovered results in more detail...

High severity issues

A1) Post-auth stored XSS/HTML injection allows JavaScript code execution in the Unimus web GUI

Starting with the biggest issue - post-auth XSS injection in the Unimus GUI. An authenticated attacker could inject XSS code in multiple places in Unimus. This is definitely a major issue; however, Unimus being on-premise, and injection being possible only from a properly authenticated account, are mitigating factors.

As we mentioned above, this has already been fixed in the 2.2.3 release, and we highly recommend all customers upgrade to this release.

Medium severity issues

There are a few medium severity issues in both the API and the web GUI. We plan to tackle these issues one-by-one over the rest of the year. Here are our notes on the individual issues.

B1) Insecure Direct Object Reference (IDOR) in the Unimus API

This simply means that the API (APIv2 specifically) uses database IDs as object identifiers in API calls. We have already migrated to using UUIDs in APIv3, so this issue will be completely gone once APIv2 is fully replaced by APIv3.

B2) No expiration on JWT tokens for the Unimus API

All API tokens currently have an infinite lifetime. We will introduce a new option to limit the lifetime of API tokens (you will still have the option to create "infinite" tokens if you wish).

This will be a new optional checkbox during API token creation, which will allow you to set a "Lifetime" for a token.
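To see what lifetime a given JWT carries, you can decode its payload segment locally with nothing but coreutils. A self-contained sketch - the token below is a dummy built in-place for illustration, not a real Unimus token:

```shell
# build a dummy unsigned JWT so this demo is fully self-contained
# (illustrative only - not a real Unimus token)
b64url() { base64 -w 0 | tr '+/' '-_' | tr -d '='; }
header=$(printf '%s' '{"alg":"none"}' | b64url)
claims_json='{"sub":"unimus-api","exp":1700000000}'
payload=$(printf '%s' "$claims_json" | b64url)
token="$header.$payload."

# inspect the claims of any JWT: decode its 2nd (payload) segment
seg=$(printf '%s' "$token" | cut -d '.' -f 2)
# restore the base64 padding stripped by the base64url encoding
case $(( ${#seg} % 4 )) in
  2) seg="${seg}==" ;;
  3) seg="${seg}=" ;;
esac
decoded=$(printf '%s' "$seg" | tr '_-' '/+' | base64 -d)
echo "$decoded"
```

A token without an `exp` claim in its decoded payload never expires on its own.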

B3) Session cookie theft allows session hijacking in the Unimus web GUI

In special cases, the session auth cookie could be stolen - allowing the attacker to hijack the authenticated user's session. This is difficult to pull off, and requires access to the user's local PC / cookie storage. As such, this is not a high severity issue - if the attacker has full access to the user's PC, all bets are off anyway.

There are some technical challenges in our web GUI framework which make this a bit difficult to fix - but we will put time into research later in the year to see if we can improve our session handling.

B4) Response time based account enumeration allows to find valid application login names in the Unimus web GUI

This is a fun one - an attacker can differentiate between valid and invalid usernames based on the time it takes Unimus to return a failed auth response. This is because for invalid usernames, cryptography (hashing) on the provided password is not performed, and as such invalid username handling is faster.

In theory, an attacker can fire tons of auth requests with random usernames, and the "slow" (in relative terms) auth failures are likely to be valid usernames.

Here are examples of how this looks:

Login      Is valid?   Response time (ms)
Unimus     Yes         60
Aa         Yes         57
Bb         Yes         55
Cc         Yes         55
WrongUser  No          26
BadUser    No          31
Invalid    No          31
Attacker   No          30

As you can see, there is still jitter, but a pattern is definitely distinguishable. Unimus being on-premise is again a huge mitigating factor here, however - you are in full control over who can access your instance. Using external auth (like Radius or LDAP) will also make this attack unfeasible.

We are currently debating how to approach this, as solutions are not as straightforward as they might seem.
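The timing oracle above can be illustrated with a toy simulation - this is not Unimus code, just a sketch where only "valid" usernames pay the (simulated) password-hashing cost:

```shell
# toy simulation of the timing oracle: the "hashing" step only runs for valid usernames
auth_delay() {
  start=$(date +%s%N)
  if [ "$1" = "unimus" ]; then
    sleep 0.05   # stands in for the password hashing a real login performs
  fi
  end=$(date +%s%N)
  echo $(( (end - start) / 1000000 ))
}

t_valid=$(auth_delay unimus)
t_invalid=$(auth_delay attacker)
echo "valid user: ${t_valid}ms, invalid user: ${t_invalid}ms"
```

Running this shows the same shape as the measurements in the table: requests for valid usernames consistently take measurably longer than requests for invalid ones.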

Low severity issues

C1) No Function Limiting in the Unimus API

This simply means that there is no request rate limiting in the Unimus API. If you want to implement request limits, we would highly recommend doing so on a front-end proxy server (such as an NGINX server acting as a reverse proxy).
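As an illustration, here is a minimal NGINX reverse-proxy sketch with request rate limiting in front of Unimus. The zone name, rates, server name and upstream port (8085 is the Unimus default) are illustrative and should be tuned for your environment:

```nginx
# limit each client IP to 10 requests/second, with a burst allowance of 20
limit_req_zone $binary_remote_addr zone=unimus_api:10m rate=10r/s;

server {
    listen 443 ssl;
    server_name unimus.example.com;

    location /api/ {
        limit_req zone=unimus_api burst=20;
        proxy_pass http://127.0.0.1:8085;
    }
}
```

The same proxy-level limiting also addresses C3 below, since it throttles requests regardless of whether they carry a valid token.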

C2) Invalid Credential UUID Accepted For Delete in the Unimus API

The APIv3 allows you to call delete methods with any string in the "uuid" field and will return an HTTP 200 response. If the submitted value is not a valid, existing UUID, however, nothing will happen. We will implement input validation and return a proper HTTP error response code if the submitted value is not in the UUID format.

C3) Missing Lock Out in the Unimus API

This is related to C1 - the Unimus API will allow any number of requests without throttling, whether or not the submitted request has a valid token.

As with C1, this can be taken care of on the front-end proxy server (such as an NGINX server acting as a reverse proxy).

C4) No account lock out policy allows password guessing attacks in the Unimus web GUI

The Unimus web GUI allows an unlimited number of auth attempts. We are conflicted on whether we should add a lockout mechanism - since Unimus is on-premise and you have control over who can access your Unimus instance. We know login lockouts can cause a lot of frustration...

Either way, Unimus does already have notifications and logging on invalid login attempts. We highly recommend either setting up alarms on logs, or enabling our built-in failed login notifications, so you are informed if someone is trying to break into your Unimus instance.

The outro

As we mentioned in the Overview, we are happy that our focus on security over the last year led to good pentest results. Going forward, we plan to resolve the outstanding issues as described above, and we are going to keep treating security as one of the fundamental priorities in Unimus.

If you are interested in full pentest reports rather than the summaries (or you need them for compliance), please feel free to reach out to us.

We hope this was an interesting read, should you have any questions, please feel free to post in the Support section of our forums.

]]>
<![CDATA[ Using Windows Server NPS for AAA in Unimus ]]> https://blog.unimus.net/using-nps-for-aaa-in-unimus/ 6228bfafe683410001cea9db Tue, 24 May 2022 14:58:49 +0000 In this guide, we would like to show how Microsoft's Network Policy Server, or NPS for short, can be configured to act as a RADIUS server to handle AAA for Unimus. By using NPS, you can use your Windows domain (Active Directory) credentials to log in to Unimus.

In this guide we assume you are already running Windows Server, Active Directory, and have installed the NPS server. If you don't have an NPS server installed yet, you can do so by navigating to Add roles and features > Role-based or feature-based installation > Select your machine > Network Policy and Access Services. Follow the wizard and confirm the dependencies it will list.

Preparing for NPS - Windows Server 2019 users

If you are running Windows Server 2019, you need to be aware of a current bug that directly affects NPS and causes RADIUS traffic to be dropped at the firewall level, despite the default port rules set up by NPS. You can fix the issue by opening the Command Prompt and running this command:

sc sidtype IAS unrestricted

Please restart the server after you make the change.

You can read more about this issue here or here.

Preparing for NPS - Users and Groups in Active Directory

To start, create a user group. This user group will be used as a condition for a network policy in NPS to authenticate users later:

Next, create a user. This will be a user we want to grant access to Unimus:

Then add the user(s) to the group you created earlier:

Preparing for NPS - Authentication methods

Before we start configuring NPS, we need to decide on the auth protocol we will use. Unimus currently supports two authentication methods, PAP and CHAP.

PAP (Password Authentication Protocol)

PAP is, by all means, an insecure protocol. When using PAP, the password is sent obfuscated (encrypted) using the shared secret between the RADIUS client (Unimus) and RADIUS server (NPS). This puts the passwords at risk, since anyone with the secret can reverse the obfuscation and recover the passwords in plaintext.

Usually, we don't recommend using PAP. However, PAP has an advantage when compared to CHAP - the password on each end can be stored encrypted using any method. Since it can be encrypted in storage, the password is much more immune to leakage if the password storage is compromised.

In short, PAP allows passwords to be stored encrypted, but transfers them over the network protected only by the shared secret.

CHAP (Challenge-Handshake Authentication Protocol)

CHAP is a more secure method, which does not transfer passwords over the network at all. Instead, when a link is established between the RADIUS client (Unimus) and the RADIUS server (NPS), the server responds with a challenge (a random string acting as a salt). The client hashes the password together with the challenge and sends only the resulting hash over the network; the server performs the same computation from its stored password and compares the results.

One disadvantage of CHAP is that the password(s) must be stored in an unencrypted or reversibly encrypted format. This exposes password storage to a potential risk if the database (AD) is compromised. Healthy Active Directory security practices are recommended.

In short, CHAP transfers passwords over the network securely, however passwords must be stored in cleartext (or using reversible encryption) in storage.
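The challenge-response idea can be sketched in a few lines of shell. This is a toy illustration (real CHAP hashes an identifier, the secret and the challenge inside a specific packet format), but it shows both why the password never crosses the network and why the server must be able to recover the plaintext password:

```shell
# toy sketch of challenge-response auth - only the hash crosses the "network"
password="s3cret"                                   # known to both client and server
challenge=$(head -c 16 /dev/urandom | od -An -tx1 | tr -d ' \n')

# client side: hash the password together with the server's challenge
client_response=$(printf '%s%s' "$password" "$challenge" | md5sum | cut -d ' ' -f 1)

# server side: recompute from the stored (recoverable) password and compare
server_expected=$(printf '%s%s' "$password" "$challenge" | md5sum | cut -d ' ' -f 1)
[ "$client_response" = "$server_expected" ] && echo "authenticated"
```

Because the server must recompute the same hash, it needs the plaintext password in storage - which is exactly the reversible-encryption requirement discussed above.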

What to do if you wish to use CHAP

If you choose to use CHAP, then before proceeding we need to prepare accounts to be usable with CHAP. As in the section above, we will be making changes to Active Directory - User and Groups. You will need to enable reversible encryption for the user's password and reset the password afterwards:

NPS Configuration

Register the NPS server in Active Directory

To use NPS, it needs to be registered as an Active Directory service. To do so, right-click the NPS (Local) entry in the top-left corner of the NPS window and click on Register server in Active Directory.

Add Unimus as a RADIUS client

Next we need to create Unimus as an NPS RADIUS Client:

Don't forget to copy the generated secret and paste it somewhere. This secret will be used in the RADIUS configuration in Unimus.

Configure NPS Accounting

Configuring accounting on NPS is also extremely helpful:

You can have NPS log into a database or a text file. If you prefer a database, you can connect NPS to your MSSQL database. Otherwise, logging to a file is the easier choice. You can also choose which types of records you want to log.

Create Network Policy for Unimus

To use Unimus with NPS, we need to create a policy:

Let's run down through some of the sections when setting up a Network Policy:

Conditions - in this guide, we are using the membership in a user group as a condition for granting access. However, it is up to you if you require or wish to create more conditions or use different conditions to authenticate accounts.

Access Permission - unless you are setting up some specific network policies, we recommend creating a single network policy with Grant access permission.

Authentication Methods - here, we want to disable MS-CHAP v1 and v2 methods, which are not currently supported, and choose the PAP or CHAP method. Refer to the section above to decide which method is ideal or preferred for your use case.

Settings - this section defines settings which can be used upon access being granted. There is no requirement for Unimus to be defined here.

This is all that is needed to configure the NPS server to authenticate logins from Unimus. No extra configuration should be required and NPS should be ready at this point.

Unimus configuration - Configuring RADIUS

You can configure RADIUS in User management > Radius configuration:

Where:

Radius server address - the address of the Windows Server machine running the NPS server.

Authentication port - the default port, unless changed, is 1812.

Authentication protocol - choose the one you selected when creating the Network Policy in NPS.

Accounting port - the default port, unless changed, is 1813.

Radius access secret - the secret you generated when adding Unimus as the RADIUS client in the NPS.

You can also test whether your configuration works by clicking Show test. You don't need to have a user account added to Unimus to test this. The test should pass if you followed this guide; if it doesn't, we have some troubleshooting steps for you at the end of this guide.

Unimus configuration - Adding RADIUS accounts to Unimus

Unimus currently requires every external user account to have a matching account in Unimus. You can create a user in User management > Users:

Where:

Username - the username matching the external account in Active Directory.

Authentication method - Radius.

Select access role - select which role the user will represent.

At this point, users should be able to log into Unimus using RADIUS and their external account credentials.

Troubleshooting

In our experience, the most useful troubleshooting information is found directly on the Windows Server, whether in the Event Viewer or in the NPS accounting log. Here's how to use both.

Event Viewer

The Event Viewer provides a lot of clarity for possible events and errors. Every entry includes a message with a clear description of the problem and an Event ID, which can be used to search for possible solutions to generic Windows issues. Here are some examples of messages you may encounter:

Log Level  Event ID  Message
Info       6273      Network Policy Server denied access to a user.
Info       6272      Network Policy Server granted access to a user.
Error      13        A RADIUS message was received from the invalid RADIUS client IP address 10.1.100.240.

Fortunately, these messages are specific, and in the Details tab you can see even more details, including NPS's internal error interpretation. For example, this way you can see more details on events where users weren't granted access, like the first example above. The fields you are interested in are ReasonCode and Reason, and you can see information like this:

ReasonCode: 19
Reason: The user could not be authenticated using Challenge Handshake Authentication Protocol (CHAP). A reversibly encrypted password does not exist for this user account. To ensure that reversibly encrypted passwords are enabled, check either the domain password policy or the password settings on the user account.

This error, for example, tells us the user's password is not stored in a reversibly encrypted format, which we can then check in the user's account settings. If reversible encryption is already enabled for the user, it is very likely the password wasn't reset after the change.

NPS Accounting log

While most of the information about a potential problem can be gathered and interpreted in a more human-readable form through the Event Viewer, the most important information is also included in the accounting log:

<Event>
    <Timestamp data_type="4">03/11/2022 11:06:45.206</Timestamp>
    <Computer-Name data_type="1">WIN-P3URXOXOR1T</Computer-Name>
    <Event-Source data_type="1">IAS</Event-Source>
    <Class data_type="1">311 1 10.100.1.111 03/03/2022 16:57:07 31</Class>
    <Authentication-Type data_type="0">2</Authentication-Type>
    <Fully-Qualifed-User-Name data_type="1">UNIMUS\unimusadmin</Fully-Qualifed-User-Name>
    <Client-IP-Address data_type="3">10.9.21.123</Client-IP-Address>
    <Client-Vendor data_type="0">0</Client-Vendor>
    <Client-Friendly-Name data_type="1">Unimus</Client-Friendly-Name>
    <Proxy-Policy-Name data_type="1">Use Windows authentication for all users</Proxy-Policy-Name>
    <Provider-Type data_type="0">1</Provider-Type>
    <SAM-Account-Name data_type="1">UNIMUS\unimusadmin</SAM-Account-Name>
    <Packet-Type data_type="0">3</Packet-Type>
    <Reason-Code data_type="0">19</Reason-Code>
</Event>

Final words

Hopefully this article can guide you through connecting Unimus with NPS. If you have any questions, or you run into any issues, please feel free to post in the Support section of our forums, or contact us through our usual support channels.

]]>
<![CDATA[ Automating MikroTik SwOS backups with Unimus ]]> https://blog.unimus.net/automating-mikrotik-swos-with-unimus-a-how-to-guide/ 62027d54e683410001cea097 Tue, 03 May 2022 19:39:22 +0000 MikroTik SwitchOS, unlike its RouterOS sibling, doesn't have a CLI interface over SSH or Telnet. In this article, we look at how to pull backups from SwOS through its HTTP(S) interface into Unimus.

While we are planning to add native support for HTTP(S) connectors into Unimus, we would like to showcase that even though native support for HTTP(S)-only devices is not ready yet, Unimus is equipped with tools that allow users to push backups from such devices into Unimus.

A few months ago we published a guide on how to push backups from FRR (FRRouting) into Unimus. In this article, we will look into MikroTik SwOS, downloading a backup via its HTTP(S) interface, and getting these backups to Unimus. Let us show you how below.

STEP 1 - Adding MikroTik SwOS devices to Unimus and generating an API token

As a first step, we want to prepare things in Unimus for our new devices and also generate an API token to be able to submit API calls and upload our backups. Let's start with the API token:

The next step is to add our SwOS devices into Unimus and give them an identical description, so that the script is able to fetch all the devices by the Description field. Lastly, we want to set them as Unmanaged devices, so Unimus will not try to run Discovery (or Backups) on these devices, as that would fail.

To do so, you can use any of the methods you already know, whether by adding them manually to the list of devices or by importing them in bulk. However, we recommend the latter. In the example below I imported a CSV file with five devices, with a predefined comment SwOS which will be imported as each device's description.

After importing the devices, we need to set them as Unmanaged. Note that you cannot edit the Description field in bulk, so you can see why the CSV import is the recommended method. It will save you a lot of time.

STEP 2 - preparing a backup and uploading it into Unimus

If you haven't had a chance to familiarize yourself with the Unimus API, we recommend checking out our introduction to the Unimus API in our FRRouting article here, which includes more details on what we will be doing here as well.

When we looked into options to upload a backup of FRR, we showed you both ways of uploading a text or a binary file. In this article we will focus only on the latter option as the SwOS backup is not human-readable and, as such, it doesn't benefit us to upload it as a text.

Binary backup means that we will be uploading a file, instead of just text. In this case, it will be SwOS's backup file - .swb.
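Before wiring this into the script, it may help to see why the backup is BASE64-encoded for transport - the encoding must survive being embedded in a JSON payload and decode back to the exact same bytes. A self-contained sketch with a stand-in "binary" file:

```shell
# create a small stand-in "binary" file and verify the BASE64 transport round-trip
printf '\x00\x01\xffSwOS' > backup.swb

# encode for the JSON payload (-w 0 disables line wrapping)
encodedbackup=$(base64 -w 0 backup.swb)

# what Unimus does on its side: decode back to the original bytes
printf '%s' "$encodedbackup" | base64 -d > restored.swb
cmp -s backup.swb restored.swb && echo "round-trip OK"

rm -f backup.swb restored.swb
```

The same `base64 -w 0` invocation appears in the full script below - wrapping the output would break the JSON payload, which is why line wrapping is disabled.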

The script below also requires some information which you will need to fill in before running it. This information includes:

  • Unimus address
  • Unimus port
  • API token you created in the first step
  • Device description string used to filter devices in Unimus - this string should be unique to just SwOS devices
  • Device username
  • Device password

#!/bin/bash

#Basic settings
unimusaddress=
unimusport=
apitoken=
devicedescription=
deviceuser=
devicepass=

#Set a working directory in the current directory
cd "${0%/*}"

#Get devices with a given description and filter out a list of IPs
getdevices=$(curl -s -H "Accept: application/json" -H "Authorization: Bearer $apitoken" "http://$unimusaddress:$unimusport/api/v2/devices/findByDescription/$devicedescription")
deviceiplist=$(echo $getdevices | grep -E -o "(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)")

for i in $deviceiplist; do
    echo $i
    #Download the backup from the device
    wget -t 1 -T 10 -q --user $deviceuser --password $devicepass http://$i/backup.swb && echo "Download finished."

    #If the downloaded backup is not empty, upload it to Unimus. Otherwise, skip it.
    if [ -s backup.swb ]; then
        #Get the current device's ID necessary for the upload
        getdeviceiddirty=$(curl -s -H "Accept: application/json" -H "Authorization: Bearer $apitoken" "http://$unimusaddress:$unimusport/api/v2/devices/findByAddress/$i")
        if [[ $getdeviceiddirty =~ \"id\":([[:digit:]]+?), ]]; then
            deviceid=${BASH_REMATCH[1]}
        fi

        #Encode the backup for transport using BASE64
        encodedbackup=$(base64 -w 0 backup.swb)

        #Upload the backup to Unimus
        curl -s -H "Accept: application/json" -H "Content-type: application/json" -H "Authorization: Bearer $apitoken" -d '{"backup":"'"$encodedbackup"'","type":"BINARY"}' "http://$unimusaddress:$unimusport/api/v2/devices/$deviceid/backups" -o /dev/null && echo -e "Upload finished.\n"
    else
        if [[ -f backup.swb ]]; then
            echo -e "Downloaded backup from device $i is empty. Skipping the file...\n"
        else
            echo -e "Backup from $i could not be retrieved. Skipping the file...\n"
            continue
        fi
    fi
    #Cleanup
    rm backup.swb
done

Here is a breakdown of the workflow of this script:

  • First, the script will use the Unimus API to fetch a list of devices (searching by the device description) and prepare a list of IPs.
  • Then, the script works through each IP within a loop.
  • In the loop, the script will download a backup and check if the backup file was successfully downloaded and if it contains data.
  • If it does, the script will use the Unimus API again to fetch the device ID for the particular IP, encode the backup, and upload it to Unimus.
  • If the file is not downloaded (e.g., the device is offline) or the downloaded file is empty, it will handle both cases separately and return the appropriate message to inform the user if such a case pops up.
  • Lastly, the script cleans up the backup file and repeats with the next IP.

This is an example of what your output may look like after the script is finished:

10.10.10.1
Download finished. 
Upload finished.

10.10.10.2
Backup from 10.10.10.2 could not be retrieved. Skipping the file...

10.10.10.3
Download finished. 
Upload finished.

10.10.10.4
Download finished. 
Upload finished.

10.10.10.5
Downloaded backup from device 10.10.10.5 is empty. Skipping the file...

Now, let's check Unimus and see if we got our backup and what we can do with it:

As you can see, Unimus received our backup, but compared to a text backup, we cannot see the contents of this file. This is because it is a binary file, and it could be in any format - .tar.gz, .bin, .zip, etc. We can still download or send it.

If there is a change to this binary file, we will see it correctly reflected in Unimus as a new change point, and if we check the Diff, we will see the changed SHA1 sum.

Note that when downloading a binary backup, Unimus will not append any extension to the file, so we recommend renaming it right away.

STEP 3 - job scheduling

One-time execution of scripts is nice, however just as with any backup job, we want it to be run automatically and periodically.

The last part of this article is adding a scheduled job to Cron. Depending on your software and/or user privileges, you might need to set up the cron job differently, e.g. via user-specific jobs using crontab -e. Since in our setup we need root privileges to access the cron configuration files, we will add a job to /etc/crontab directly, and set our script to run every night at 3 AM (just like the default schedule in Unimus).

0 3 * * *    root    /root/swos_backup.sh
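If there is any chance a run could still be in progress when the next one starts (for example, with many devices or slow links), you may want to serialize executions with flock. A sketch of the same crontab entry - the lock file path is arbitrary:

```shell
0 3 * * *    root    /usr/bin/flock -n /tmp/swos_backup.lock /root/swos_backup.sh
```

With -n, a run that finds the lock already held simply exits instead of piling up behind the previous one.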

And that's it! We have created an automated way to generate and upload backups into Unimus for MikroTik SwOS devices.

Final words

We hope this article can serve as a template that can be used to upload any files / backups into Unimus. If you have any questions, or run into any issues using the examples in this article, please feel free to post in this topic in the Automation section of our Forums.

]]>
<![CDATA[ Unimus Core HA deploy - a how-to guide ]]> https://blog.unimus.net/unimus-core-ha-deploy/ 622680dde683410001cea8b3 Tue, 08 Mar 2022 01:58:37 +0000 We often get asked how to deploy the Unimus Core in a high availability scenario. While Unimus can natively handle multiple Cores attempting to connect and become the active poller for a single Zone by dropping an incoming Core connection if another Core is already active, this is not an ideal solution in large-scale deploys. In this article we will explore setting up a clustered Unimus Core deploy using Corosync and Pacemaker.

Here is a high-level diagram of what our example setup looks like:

Using clustering, only one of the Cores will ever be active - this is an active / passive HA scenario. If the active Core fails for any reason, Pacemaker will fail the service over to another available cluster member.

For the sake of simplicity we will be deploying a 2-node cluster in this example.

Components of the cluster

Components of our clustering solution:

  • Linux - our base operating system that our cluster nodes run.
  • Corosync - Provides cluster node membership and status information. Notifies of nodes joining/leaving cluster and provides quorum.
  • Pacemaker - Cluster resource manager (CRM). Uses the information from Corosync to manage cluster resources and their availability.
  • pcs - A helper utility that interfaces with Corosync (corosync.conf) and Pacemaker (cib.xml) to manage a cluster.
  • Unimus Core - Our service we want to have highly available.

We will use pcs to manage the cluster. pcs is a cluster manager helper that we can use as a single frontend for the setup and management of our cluster. Without pcs, you would need to set up Corosync manually through the corosync.conf config file, and manage the Pacemaker configuration through its crm utility.

Deploying Corosync / Pacemaker without pcs is absolutely possible, but for the sake of simplicity we will rely on pcs to set up Corosync and Pacemaker for us.

Preparations

The example commands below were tested on Ubuntu 18, but the setup should be very similar on any other Linux distro. We assume you are starting with a clean Linux system. As such, we need to prepare both our cluster members by running these commands:

# run everything as root
sudo su

# update
apt-get update && apt-get upgrade -y

# install dependencies
apt-get install -y \
  wget \
  curl \
  corosync \
  pacemaker \
  pcs

# install Unimus Core in unattended mode
wget https://unimus.net/install-unimus-core.sh && \
  chmod +x install-unimus-core.sh && \
  ./install-unimus-core.sh -u

# setup Unimus Core config file
cat <<- "EOF" > /etc/unimus-core/unimus-core.properties
  unimus.address = your_server_address_here
  unimus.port = 5509
  unimus.access.key = your_access_key
  logging.file.count = 9
  logging.file.size = 50
EOF

Next up, we need to set up a single user that is the same across all cluster nodes. This user will be used by pcs to kickstart our cluster setup. pcs already creates a hacluster user during its installation, so we will just change that user's credentials:

CLUSTER_PWD="please_insert_strong_password_here"
echo "hacluster:$CLUSTER_PWD" | chpasswd

After we have a common user across our cluster nodes, pick one node from which we will control the cluster, and run these commands to set up the cluster:

CLUSTER_PWD="please_insert_strong_password_here"

# setup cluster
pcs cluster auth test-core1.net.internal test-core2.net.internal -u hacluster -p "$CLUSTER_PWD" --force
pcs cluster setup --name unimus_core_cluster test-core1.net.internal test-core2.net.internal --force

# start cluster
pcs cluster enable --all
pcs cluster start --all

Since we are using a 2-node cluster in this example, we need to set a few other specific properties. First, we disable quorum enforcement, as a meaningful quorum cannot be achieved with only 2 nodes. We also disable fencing (STONITH).

pcs property set no-quorum-policy=ignore
pcs property set stonith-enabled=false

Our cluster setup should now be done, so let's check our cluster status:

pcs property list
pcs status

You should see both your cluster nodes online, like this:

root@test-core1:~# pcs status
Cluster name: unimus_core_cluster
Stack: corosync
Current DC: test-core1 (version 1.1.18-2b07d5c5a9) - partition with quorum
Last updated: Tue Mar  4 01:08:51 2022
Last change: Tue Mar  4 01:04:49 2022 by hacluster via crmd on test-core1

2 nodes configured
0 resources configured

Online: [ test-core1 test-core2 ]

No resources


Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
root@test-core1:~# 

Troubleshooting

If you don't see your cluster members online, or pcs status complains about some issues, here are a few common pitfalls:

  • Your cluster nodes should NOT be behind NAT (this is possible, but requires more config not covered in this guide).
  • You must use hostnames / FQDNs for cluster nodes. Using IPs is a no-go. If needed, create hostnames for cluster nodes in /etc/hosts.
  • The hostname / FQDN you use must not resolve to 127.0.0.1 or another loopback address. Corosync / Pacemaker require that the hostnames / FQDNs used for clustering resolve to the actual cluster member IPs.

In general, most of these issues can be resolved by proper DNS setup, or by creating proper records in /etc/hosts.
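For example, if you don't run internal DNS for these nodes, static records on both cluster members could look like this (a sketch only - the addresses are placeholders for your actual node IPs):

```
# /etc/hosts on both cluster nodes
10.10.10.11   test-core1.net.internal   test-core1
10.10.10.12   test-core2.net.internal   test-core2
```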

Creating a cluster resource

Now that our cluster is up, we can tell Pacemaker to start managing the Unimus Core service as a clustered service.

However, we first need to stop Unimus Core and prevent it from starting automatically at system startup on each node:

# disable Core autostart, Pacemaker will control this
systemctl stop unimus-core
systemctl disable unimus-core

Then we can create our cluster resource through pcs on one of our cluster nodes:

# mark a node as ineligible to run the service if the service fails to start on it
pcs resource defaults migration-threshold=1

# setup our cluster resource
pcs resource create unimus_core systemd:unimus-core op start timeout="30s" op monitor interval="10s"

You will notice we use systemctl, and that we declared the cluster resource using the systemd resource agent. We do this because Ubuntu 18 (which we are showcasing this setup on) uses systemd. If you are running a distro which doesn't use systemd as its init system, you will need to do things differently.

We recommend checking out Pacemaker documentation on available resource agents and how to use them.

Monitoring cluster resources

Now that our cluster resource is created, let's check if it works:

pcs status resources

You should see that the Core is running on one of the nodes. Here is how our output looks:

root@test-core1:~# pcs status resources
 unimus_core	(systemd:unimus-core):	Started test-core1
root@test-core1:~# 

You can also check the status of the unimus-core service on both of your cluster nodes:

# on core1
root@test-core1:~# systemctl status unimus-core
● unimus-core.service - Cluster Controlled unimus-core
   Loaded: loaded (/etc/systemd/system/unimus-core.service; disabled; vendor preset: enabled)
  Drop-In: /run/systemd/system/unimus-core.service.d
           └─50-pacemaker.conf
   Active: active (running)
...
root@test-core1:~# 

# on core2
root@test-core2:~# systemctl status unimus-core
● unimus-core.service - Unimus Remote Core
   Loaded: loaded (/etc/systemd/system/unimus-core.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
root@test-core2:~# 

You should also see the Core connect to your Unimus server, and the Zone should be ONLINE.

Live monitoring of cluster status

To monitor the cluster status live, you can run crm_mon in its interactive mode (just run the crm_mon command with no arguments) and watch the Core service fail over to the 2nd node on failure.

Simulating a failure

You can easily simulate a failure in many ways. You can reboot one of your cluster members, and you should see the failover occur. You should also see the Zone briefly go OFFLINE in Unimus and then back ONLINE. You can also simulate a failure on one of the cluster nodes by running:

crm_resource --resource unimus_core --force-stop

You should see that the Core started on the other node:

root@test-core1:~# pcs status resources
 unimus_core	(systemd:unimus-core):	Started test-core2
root@test-core1:~# 

For the original node (test-core1 in our case) to be considered as a viable node to run our resource, we need to run:

pcs resource cleanup unimus_core

If you want to migrate the service back to the first node, you can run:

# force a move to another cluster member
crm_resource --resource unimus_core --move

# clear any resource constraints we created
crm_resource --resource unimus_core --clear

A move may create a constraint to not place the service on the previous node in the future. This is why we clear all constraints after a move. A useful command to check existing constraints on our cluster resource is:

crm_resource --resource unimus_core --constraints

Final words

Hopefully this article can guide you in creating a HA setup for your Unimus Cores. If you have any questions, or you run into any issues, please feel free to post in the Support section of our forums, or contact us through our usual support channels.

]]>
<![CDATA[ Release Overview - Unimus 2.2.0 ]]> https://blog.unimus.net/release-overview-unimus-2-2-0/ 621543b2e683410001cea727 Fri, 25 Feb 2022 18:57:06 +0000 With each new release, we also upload a release overview video, so if you prefer a video format, you can find it here:
Youtube - 2.2.0 Release Overview video

For those who prefer readable content, read on!


Device Variables for Config Push

The biggest new feature of this release are the Device Variables. These allow you to inject per-device unique values into generic Config Push presets. This can be used to create Push presets which are generalized and pushed to a large set of devices, but by using variable substitution the data pushed to each device will be tailored to that device.

Basically, Push presets can now behave more like templates, which was previously not as easily possible. More info on variables on our Wiki.
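As an illustrative sketch (the commands and variable names below are made up for this example, not taken from the release itself), a generic Push preset could look like:

```
interface Loopback0
 ip address ${loopback_ip} 255.255.255.255
!
snmp-server location ${site_location}
```

With loopback_ip and site_location defined per device, every device in the target set receives its own substituted values from this single preset.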


New APIv3

We are releasing the first wave of APIv3 endpoint groups in 2.2.0. In this release, you can now use APIv3 to manage:

  • CLI mode change passwords
  • Credentials
  • Jobs
  • Tags
  • Zones

Plans are to continue implementing the remaining endpoint groups in point releases after 2.2.0. We are focusing on covering what isn't available in APIv2 first, and after we have API feature parity with the GUI, we will start adding endpoints into APIv3 which are currently covered in v2.


Mass Config Push available over the API

The addition of "Jobs" API endpoints is of significant note, as these allow you to use Config Push over the API.

If you want to build your own custom network automation front-end, Unimus can now be used as its back-end. This means you don't need to deal with all the intricacies of device communication which Unimus already handles, and allows you to leverage our existing 240+ network device drivers from your own applications.


Improvements to API Token management

As a part of the API work, we have also substantially improved API Token management. Tokens can now have descriptions and comments, and new security controls were introduced to specify whether tokens should have access to device Credentials and CLI mode change passwords.


Performance improvements

We also put a lot of effort into performance improvements in this release, with a target to support 120.000 devices inside a single Unimus instance. This required significant work across all subsystems in Unimus.

Some highlights of performance differences between 2.1.4 vs. 2.2.0:

  • Job initialization time on 120k devices down from 9 minutes to 1.5 minutes
  • Average MikroTik RouterOS job duration down from 21 seconds to 9 seconds
  • UI component responsiveness massively improved - for example, select all on 120k entities in the UI down from previously 3 minutes to 8 seconds now
  • All UI components, screens and tables load data in under 10 seconds (average UI load at ~2 seconds, with 10s being the worst result) with 120k devices in the system
  • Full discovery + backup on 120k devices in 2 hours 45 minutes (using 600 concurrent jobs)

We will be publishing a technical blog article with details on performance improvements, as well as another blog article on large-scale Unimus deploy performance tuning in the near future.


Discovery algorithm improvements

During Discovery, Unimus tests and validates which Credentials are available for your devices. Our "Credential Binding" feature serves as a method to prevent Unimus from testing all available credentials on devices, allowing you to set specific credentials that should be used for device communication.

As a part of our work on performance we also optimized device connections during Discovery. If only a single credential is available for a device (either due to Credential Binding, or simply just having one credential in the system), Unimus will now only perform a single connection to the device during Discovery. Previously multiple connection attempts would be performed in this scenario, as the Discovery flow for single and multi-credential discoveries was the same. This has now been optimized.

In case of using SSH, this can result in significant CPU utilization savings during device jobs, as SSH session establishment is computationally expensive. More info on the new behavior on our Wiki.


Security improvements

Another part of Unimus that received heavy focus during 2.2.0 development was security. We performed an internal security audit of Unimus in advance of full Penetration Testing.

We have found and fixed multiple security-related issues of various severity in this release - please check the full Changelog for more info.

Unimus will undergo a full Penetration Testing cycle during March 2022. We will publish the pentest report publicly on our Blog - stay tuned.


Other minor new features

In addition to the major features and changes above, this release also brings a bunch of smaller changes and improvements. We added an option to set the UI session timeout, added support for NetXMS v4, RouterOS v7, and many other minor improvements.

Please check the full Changelog below for more information.


Bug fixes and security fixes

As mentioned in a previous section, security and stability were a large time-investment on our end during the development cycle of this release. In addition to security, we have also fixed a slew of bugs, issues and UI inconsistencies of various severity.

Altogether, 33 various bugs and 20 various security-related issues were fixed. Please check the full Changelog below for full details.


As with each new release, we also added support for a bunch of new networking vendors and devices. In 2.2.0 we are adding support for 12 new device types, from 9 separate networking vendors.

The Changelog for 2.2.0 is quite long, as this is one of our largest releases to date. If you want to see all the changes in this release, please check the full Changelog below:

= Version 2.2.0 =
Features:
  Added option to set UI session timeout (example "-Dserver.servlet.session.timeout=1h")
  Updated NetXMS client library to latest version (4.0.2156)
  Added additional built-in Backup Filters for FortiOS devices
  Added missing search in Config Mode Password binding window (Devices > Edit)
  Unmanaged devices are now displayed with Italic font in "Backups" screen (same as in "Devices")
  Added support for device selection menus on Cisco IOS
  Added support for CLI sections in FortiOS
  Improved Huawei VRP driver compatibility
  Improved detection and grouping of invalid commands in Config Push
  Reordered buttons on the Devices screen into logical groups (better UX)

  New Device Variables feature for Config Push
    - Variables can be defined for devices in the Device screen
    - both single and multi-device variable edits are supported
    - Variables can be used in Config Push in the "${variable_name}" format
    - more info: https://wiki.unimus.net/display/UNPUB/Device+Variables

  Added new APIv3:
    - implemented new v3 API, exposing functionality currently missing in APIv2
    - currently available endpoints: "Jobs", "Zones", "Tags", "Credentials", "CliModeChangePasswords"
    - API tokens now have a new "Allow access to credentials" checkbox
    - please check http(s)://your_unimus_address/api/v3/ui for new built-in API docs
    - APIv2 will remain functional for the foreseeable future

  Improvements to API Token management:
    - added "Description" to API tokens
    - API tokens now have a new "Allow access to credentials" checkbox
    - added an "Edit" button for API tokens

  Mass Config Push is now available over APIv3:
    - added an "API Jobs" tab to Config Push if any API jobs exist
    - new retention settings for API Push Job history
    - see above section for details on APIv3

  Performance improvements:
    - general improvements across the application due to DB structure and data access improvements
    - substantial performance improvements in high-concurrency environments due to JDBC datasource change
    - Config Search has been offloaded to the database (as required per DB engine), bringing much better performance
    - optimized job initialization time (10x faster when running jobs on 5.000 devices)
    - a single Unimus instance can now handle 120.000 devices with full discovery + backup on 120k devices in 2 hours 45 minutes
    - UI component responsiveness massively improved (for example, select all on 120k objects in the UI now takes 8 seconds, from 3 minutes previously)
    - with 120.000 devices in Unimus, all screens now load in under 10 seconds max (average screen load at 2 seconds)

  Security improvements:
    - performed an internal security audit of Unimus in advance of full Penetration Testing
    - more info on found and fixed issues in the "Security fixes" section
    - updated user password hashing algorithm to Argon2 (previously Bcrypt2 was used)
    - existing user passwords will be migrated on first successful login
    - Unimus 2.2.0 will undergo a full pentest cycle, results will be published publicly on our Blog

  Optimization of device connection count during Discovery:
    - only open a single CLI session when only a single credential is available for a device
    - applies when credential discovery is not needed due to Credential Binding
    - more info: https://wiki.unimus.net/display/UNPUB/Discovery

  Rewrite of MikroTik RouterOS driver:
    - performance increases, average discovery on ROS down to ~9 seconds (from 21 seconds)
    - added handling for new CLI behaviors introduced in latest ROSv6 versions
    - added support for ROSv7

  Added support for:
    - ArubaOS v6
    - DrayTek VigorSwitch
    - Engage IPTube
    - FiberStore Campus switches
    - Hatteras / Overture Networks
    - Huawei USG
    - JunOS EVO
    - MikroTik RouterOS 7
    - Planet XGS switches
    - other various Planet switches
    - Ubiquiti Dream Machine (UDM)
    - Ubiquiti LTU / LTU-Pro

Fixes:
  Fixed a memory leak if a Core connection connected and disconnected frequently
  Fixed wrong Running Job state could be set on devices during heavy concurrency operations
  Fixed job history records would not be created for devices with extremely long addresses
  Fixed a running Network Scan not being stopped if its Preset was deleted
  Fixed description missing in Mode Change Password binding (Devices > Edit)
  Fixed running job state could be reverted to a wrong state when Managing / Unmanaging devices while a job was running
  Fixed select all / deselect all and the selection model in general could break in the "Device credentials" table
  Fixed moving devices between Zones could cause the Zone Number to update even if device was not moved due to address conflict
  Fixed changing a user's role to visually break the Backups screen if the affected user was already on it
  Fixed possibility to add Comments to deleted objects if the Comment window was opened while object was deleted
  Fixed actions buttons not working properly in "Backups > Configuration" in specific cases
  Fixed wrong time formatting in "User management > System access history > Session end" (values were correct in DB)
  Fixed "Other settings > Per-Tag connectors" would not properly show all configured ports for a connector
  Fixed attempting to remove all Users would throw an exception (will now properly remove all users other than yours)
  Fixed the Zones screen not properly refreshing when specific changes were done to Zones by another user
  Fixed select all on tables with extremely large amounts of objects could cause loading to take a very long time
  Fixed enabling "Show all passwords" in the "CLI mode change passwords" table could cause bad behavior in the "Device credentials" table
  Fixed search in "Import history jobs" did not work
  Fixed the "port" field being formatted wrongly in the "Notifications > Email" screen
  Fixed changing a user's role to duplicate the Theme selector on the Dashboard if the affected user was already on it
  Fixed Credentials screen did not live-update changes to counters when credentials were Bound / Unbound by another user
  Fixed "Basic import > CSV file import" could throw exceptions to the UI when an invalid CSV file was provided
  Fixed possibility to add Device Access restriction without selecting an account, which resulted in an exception
  Fixed Comment icon column in the Schedules screen was not properly sized
  Fixed rare scenarios where upgrade from 2.0 or 2.1 to latest versions could fail
  Fixed possible invalid input in "Notification settings > Diff before and after lines"
  Fixed multiple rare errors on concurrent operation attempts on already deleted objects during multi-user workflows
  Fixed multiple other minor UI and UX issues and missing live value changes during multi-user workflows
  Fixed discovery failing on some models of Adtran TA
  Fixed discovery failing on JunOS-EVO devices
  Fixed discovery failing to recognize newer Planet switch types
  Fixed Config Push on MikroTik RouterOS could fail on specific commands with long output
  Fixed output formatting in Config Push on some MikroTik RouterOS versions could be broken
  Fixed backup could contain some extra unwanted data on some MikroTik RouterOS versions

Security fixes:
  Completely removed log4j library due to multiple exploits that were identified in this library
  Log out all of a user's other sessions if the user changes their password (other than the session changing the password)
  Log out all sessions of a user if their password is changed by another Administrator user
  Users logged out due to session timeout are redirected to the Login screen instead of just an overlay on their last screen
  Fixed user could remove Backup Filters applied to Tags the user didn't have access to
  Fixed users could re-run Push presets from output group context menu even if they didn't have access to do this
  Close currently opened "Show password" popups in the Credentials and "Device > Info" screens when a password is set to "High security mode"
  Close currently opened "Show password" popups in the Credentials and "Device > Info" screens when a user's role is changed to READ-ONLY
  Fixed Backups screen would not remove access to already opened device backups if access to a device was lost
  Fixed users without access to the Default Zone could still add devices through "Network Scan"
  Changed APIv2 to no longer expose credential passwords through Device endpoints (there was no way to control this), use APIv3 for credential access

  Fixed multiple instances of "live" access changes not working (screen change / reload was required to apply new access restrictions):
   - for all affected screens affected data will be added / removed immediately after accessibility is changed now
   - fixed Dashboard not listening to live device access changes
   - fixed Zones not listening to live access changes
   - fixed "Mass Config Push > Targets" not listening to live device access changes
   - fixed "Mass Config Push > Output groups" not listening to live device access changes
   - fixed "Other settings > Per-Tag connectors" not listening to live access changes
   - fixed Devices screen not listening to Zone-based device Tag live access changes (Tag propagations to Devices from Zones)
   - fixed "Basic import" not listening to live Zone access changes

Embedded Core version:
  2.2.0

Known issues:
  ISSUE: "Re-discover affected devices when Ports or Connectors change" Advanced Settings option does not work
  WORKAROUND: none
  STATUS: issue scheduled for fixing

  ISSUE: "Stop" in Config Push does not work
  WORKAROUND: none
  STATUS: issue scheduled for fixing

  ISSUE: Some screens in Unimus show time in server's time zone, others in client's (browser's) time zone
  WORKAROUND: none, issue only relevant if client has different time zone than server
  STATUS: we are debating on how to fix this - will likely create a setting to select which TZ should be used
]]>
<![CDATA[ Automating Cisco IOS updates with Unimus - Part 2 ]]> https://blog.unimus.net/automating-cisco-ios-upgrades-with-unimus-part-2/ 61fb33dae683410001ce9cd0 Tue, 22 Feb 2022 17:09:31 +0000 Intro

In Part 1 of our Cisco IOS upgrade automation series, we focused on a simple and quick solution to upgrade (or downgrade) Cisco IOS devices with just a couple of commands in Unimus, deployed through our Mass Config Push functionality.

Today we will continue this endeavor and show you a more detailed and advanced solution. This article attempts to create a one-stop solution by leveraging TCL scripting to make updating all your IOS devices easy. All IOS-powered devices should be able to update using this guide and script, regardless of whether they are a router or a switch, and also regardless of the product series. All at the same time.

Let's start with what we need. In Part 1 we already described that we need a server/device to source upgrade FW images from, Cisco IOS devices and Unimus. Here is an abstract component diagram:

We will be using the same "topology" as in the previous article, but we will build upon it with some additions:

  • FW image source - this time it will store not just the IOS FW images, but also two scripts. One will be a bash script which we will use to generate a list of FW images for devices to find an upgrade candidate in, and the other one will be a dedicated upgrade TCL script.
  • Cisco IOS devices. These will be downloading the TCL upgrade script, list of available images, and lastly the FW image file to update to (assuming the devices find an update candidate in the image list).
  • Unimus and our Mass Config Push feature (version 2.1.0 and newer) with a slightly different set of commands to push compared to Part 1's preset.

Before we start, let us add a disclaimer. Large scale infrastructure automation is almost never easy. While working on this script, I encountered a number of quirks I had to deal with that were unique to one device but not to another. Keep in mind that your Mass Config Push preset may well finish with errors - I expect that to happen.

If errors happen, don't hesitate to check out the troubleshooting FAQ at the end or contact us. It will take a number of picky devices to iron out all weird behaviors and inconsistencies across all the various IOS versions. At least that was definitely my own experience.

And lastly - don't forget to test everything in a lab environment before deploying to your production network.

Preparing the Image / FW source

Contrary to Part 1, we will have higher requirements for the image source server. Once again, we will use a VM running Linux for our showcase. As outlined above, we will be generating a list of available FW images with their MD5 sums, which the upgrade TCL script will verify the downloaded file against. Checking image validity before deploying it is one of the first big features of this upgrade process.

Let's reiterate. The server VM will now "serve" the following purposes:

  • We will serve our IOS images from this machine. Note that we are focusing only on IOS images in .bin format, not archived images in .tar format. This is done so that we can avoid as many issues as possible with devices with insufficient free space, or ones with smaller flash capacity. These wouldn't be able to fit multiple IOS images even if their flash was empty.
  • We will generate a sorted list (supporting IOS versioning) of FW images with their respective MD5 sums and make this available for our devices to download.
  • We will serve our upgrade TCL script. Our IOS devices will download and execute this to find an update image candidate, download it, verify it, and set it as the boot image.

FW Transfer methods

Just like last time, we are focusing on the SCP and HTTP protocols. Usually we see networkers using TFTP or FTP, but we seldom see anyone choosing SCP or HTTP. SCP and HTTP are much more robust protocols, and we wanted to showcase these, albeit less popular, options.

If you are interested in more information on both, including some probably necessary steps to make SCP work with all your devices, please refer to the Part 1 article. Alternatively, jump to the third point of the troubleshooting FAQ at the end of the article, where we describe possible issues with SCP and how to deal with them.

With that said, let's prepare our FW image source server. First, where to put the files:

SCP

If you opt for SCP you will be placing your files into the home folder of a user you choose. Let's say we create a special user called unimus for this. In this case, you are going to place your files into that user's home folder, which will be, by default, /home/unimus.

HTTP

If you opt for HTTP you will be using your site's root folder. In our case, we use NGINX and the root folder for the web location we will use is /var/www/ciscoiosupgrade.netcore.internal.
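For completeness, a minimal NGINX server block serving that directory could look like this (a sketch only; adapt the listen port and names to your environment):

```
server {
    listen 80;
    server_name ciscoiosupgrade.netcore.internal;
    root /var/www/ciscoiosupgrade.netcore.internal;
}
```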

Choose the directory according to the method you chose for IOS image hosting. Note that in both cases the script only supports files placed in the main (root) directory from which they are served, not sub-directories.

In these directories you also need to place both scripts, which you can find below. With that out of the way, let's introduce both scripts and look at the whys and whats of each of them.

Overview of the FW image parsing bash script

This is an absolutely basic script, but still neat. It simply takes all files with a .bin extension in the same directory, runs them through the Linux md5sum command, sorts the output intelligently using sort with the -V argument to take file versioning into account, and outputs a finished file called fwlist.

#!/bin/bash

#Change into the directory the script resides in
cd "${0%/*}"

#Extract file names and MD5 sums, sort them and output into a file
md5sum *.bin | sort -V -k2 > fwlist
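If your image set changes regularly, you may want to regenerate the list on a schedule. A hypothetical cron entry (the script path and name are assumptions - use wherever you saved the script above) could be:

```
# /etc/cron.d/fwlist - regenerate the FW image list every hour
0 * * * *  unimus  /home/unimus/generate-fwlist.sh
```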

Here's an example of what the finished list can look like:

a63c90cc3684ad8b0a2176a6a8fe9005  c180x-advipservicesk9-mz.151-4.M12a.bin
6d0bb00954ceb7fbee436bb55a8397a9  c1900-universalk9_npe-mz.SPA.158-3.M7.bin
28518159ba5f75ef0eeb9617fd35e2ba  c2800nm-advipservicesk9-mz.124-24.T4.bin
441018525208457705bf09a8ee3c1093  c3750e-ipbasek9-mz.122-55.SE5.bin
862dec5c27142824a394bc6464928f48  c3750e-universalk9-mz.122-55.SE5.bin
fd4b38e94292e00251b9f39c47ee5710  c3750e-universalk9-mz.152-4.E10.bin
1f94dacb4faf2829b0ffbb25ebd62e2e  c3750-ipbasek9-mz.150-2.SE5.bin
b28cf0ed5cc0d1928ea4f6656e1c8dde  c3750-ipservicesk9-mz.122-55.SE12.bin
871bdd96b159c14d15c8d97d9111e9c8  cat4500-ipbasek9-mz.150-2.SG11.bin
3287282fa1a1523a294fb018e3679872  s72033-adventerprise_wan-vz.122-33.SXI14.bin
a302a771ee0e3127b8950f0a67d17e49  s72033-ipbase-mz.151-2.SY16.bin
bbf7c6077962a7c28114dbd10be947cd  s72033-ipservicesk9-mz.151-2.SY16.bin
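If you want a quick feel for how version-aware sorting orders these names before you publish the list, you can try sort -V on a few samples (the filenames below are just illustrative):

```shell
# '-V' compares embedded numeric runs (122 < 152), so within the same
# image family newer trains sort after older ones
printf '%s\n' \
  'c3750e-universalk9-mz.152-4.E10.bin' \
  'c3750e-universalk9-mz.122-55.SE5.bin' \
  'c3750e-ipbasek9-mz.122-55.SE5.bin' \
  | sort -V
```

The last entry for a given platform / feature-set prefix is therefore the newest image, which is what the upgrade TCL script relies on when it takes the last regexp match from the list.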

Overview of the upgrade TCL script

tclsh

#Load and store list of available FW images
set fwrawlist [read [open fwlist]]

#Retrieve FW type of the current device for further processing
set devicefwdirty [exec show version | include System image file is]
#Get a full name of the current FW
regexp -all {:(.*?)\"} $devicefwdirty junk devicefwfull
#Get the FW type with the major release version; this prevents issues with some devices which might not be compatible (mainly storage constraints) with the next major release version
regexp -all {:(.*?-.*?-.*?\.\d\d).*?.bin} $devicefwdirty junk devicefwrelease

#Find FW update candidate
puts "Finding a viable FW update candidate..."
#Process the list, return the latest match (it will be the latest FW image)
set fwlistparsed [regexp -all -line "\\s{2}($devicefwrelease.+?.bin)$" $fwrawlist junk down_file]
#If no match is found, abort the script
if {$fwlistparsed == 0} {
    puts "List of available FWs does not contain any update candidate. Aborting..."
    return
}
#Run fullname string comparison, if matched, current and matched FW are identical, abort the script
if {[string compare $devicefwfull $down_file] == 0} {
    puts "Current and matched FW image are identical. Aborting..."
    return
}
#If the current and matched FW are not identical, start comparing them on each level
#Compare major release version
regexp -all {\.([0-9]{1,4})\-} $devicefwfull junk curfwmatch1
regexp -all {\.([0-9]{1,4})\-} $down_file junk newfwmatch1
if {$curfwmatch1 == $newfwmatch1} {
    #Major release version is identical, compare minor release version
    regexp -all {\-([0-9]{1,3})\.[a-zA-Z]*?[0-9]*?[a-zA-Z]*?\.bin} $devicefwfull junk curfwmatch2
    regexp -all {\-([0-9]{1,3})\.[a-zA-Z]*?[0-9]*?[a-zA-Z]*?\.bin} $down_file junk newfwmatch2
    if {$curfwmatch2 == $newfwmatch2} {
        #Minor release version is also identical, compare revision
        regexp -all {\-[0-9]{1,3}\.[a-zA-Z]*?([0-9]*?)[a-zA-Z]*?\.bin} $devicefwfull junk curfwmatch3
        regexp -all {\-[0-9]{1,3}\.[a-zA-Z]*?([0-9]*?)[a-zA-Z]*?\.bin} $down_file junk newfwmatch3
        if {$curfwmatch3 == $newfwmatch3} {
            #Revision is also identical; in this case it suggests some other problem in versioning or FW naming
            puts "Current and matched FW image and their (numeric) version seem identical, but full names are not. Aborting..."
            return
        } elseif {$curfwmatch3 > $newfwmatch3} {
            puts "Current FW image is newer than the matched one from the list of available FWs. No update candidates were found. Aborting..."
            return
        } elseif {$curfwmatch3 < $newfwmatch3} {
            puts "Update candidate found."
        } else {
            puts "Unknown error occurred during FW matching. Aborting..."
            return
        }
    } elseif {$curfwmatch2 > $newfwmatch2} {
        puts "Current FW image is newer than the matched one from the list of available FWs. No update candidates were found. Aborting..."
        return
    } elseif {$curfwmatch2 < $newfwmatch2} {
        puts "Update candidate found."
    } else {
        puts "Unknown error occurred during FW matching. Aborting..."
        return
    }
} elseif {$curfwmatch1 > $newfwmatch1} {
    puts "Current FW image is newer than the matched one from the list of available FWs. No update candidates were found. Aborting..."
    return
} elseif {$curfwmatch1 < $newfwmatch1} {
    puts "Update candidate found."
} else {
    puts "Unknown error occurred during FW matching. Aborting..."
    return
}

#Download FW update
#Read common arguments, abort if mandatory arguments are missing, and decide which protocol will be used
set down_prot [lindex $argv 0]
if {[string length $down_prot] == 0} {
    puts "No argument was defined, please add arguments to your MCP in Unimus where you execute this command. Aborting..."
    return
}
set down_addr [lindex $argv 1]
if {[string length $down_addr] == 0} {
    puts "Second argument (address) is missing. Aborting..."
    return
}
#Read HTTP specific arguments and build download URL
if {[string compare http $down_prot] == 0} {
    set down_port [lindex $argv 2]
    if {[string length $down_port] == 0} {
        #Use port 80 if no custom port is defined
        set down_port "80"
    }
    set down_url "http://$down_addr:$down_port/$down_file"
#Read SCP specific arguments and build download URL
} elseif {[string compare scp $down_prot] == 0} {
    set down_user [lindex $argv 2]
    if {[string length $down_user] == 0} {
        puts "Third argument (user) is missing. Aborting..."
        return
    }
    set down_pass [lindex $argv 3]
    if {[string length $down_pass] == 0} {
        puts "Fourth argument (password) is missing. Aborting..."
        return
    }
    set down_url "scp://$down_user:$down_pass@$down_addr/$down_file"
} else {
    puts "Unrecognized protocol. Aborting..."
    return
}
puts "Downloading firmware..."
set down_result [exec copy $down_url flash:]
#Evaluate download result
if {[regexp {bytes copied} $down_result]} {
    puts "Update FW image was downloaded successfully."
} elseif {[regexp {Not enough space} $down_result]} {
    puts "Error occurred during download - insufficient space left on device. Aborting..."
    return
} elseif {[regexp {Protocol error} $down_result]} {
    puts "Error occurred during download - protocol error. Aborting..."
    return
} elseif {[regexp {busy} $down_result]} {
    puts "Error occurred during download - device is busy. Aborting..."
    return
} else {
    puts "Unknown error occurred during download. Aborting..."
    return
}

#Validate MD5 of the downloaded FW image
puts "Validating integrity..."
#Run validation for the downloaded FW image
set down_file_md5check [exec verify /md5 $down_file]
regexp -all -line "=\\s{1}(.+?)$" $down_file_md5check junk down_file_md5
regexp -all -line "(.+?)\\s{2}$down_file" $fwrawlist junk fwmd5tocomp
#Compare both MD5 sums
if {[string compare $down_file_md5 $fwmd5tocomp] == 0} {
    puts "Update FW image validated successfully."
} else {
    puts "Unknown error occurred when validating update FW image integrity, MD5 sums do not match. Aborting..."
    return
}

#Set up system boot image with the downloaded update FW image
puts "Updating..."
ios_config "boot system flash:$down_file"
puts "Update is ready. Please run your reload MCP preset..."

Here is a breakdown of the workflow of this script:

  • The script ingests the list of available FW images from the pre-generated fwlist file we prepared earlier (with the bash script).
  • Script checks the current version of IOS running on the device.
  • Script compares the current IOS version to all available FW images - note we are matching only the same major release version (if your device is running a 12.X release, you will be able to upgrade only to a newer version of the 12.X release, not to a newer major release like 15.X or 17.X).
  • If the script matches multiple upgrade candidate image files, it will always choose the last match, which will be the latest FW image (thanks to version-aware sorting in the list of FW images).
  • Script processes input arguments, evaluates them and builds a download URL for the device to download the new FW image.
  • The new FW image is downloaded, and its integrity is verified by comparing its MD5 sum against the known good sums on the image server.
  • If the integrity checks out, the script sets up the FW image as the boot image and returns a final message informing the user to reload the device.
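
The fwlist file the script ingests in the first step can be produced with standard tools. Below is a hypothetical, self-contained sketch (the directory, image names and dummy file contents are examples, not the actual bash script from the earlier part), assuming GNU coreutils:

```shell
# Illustrative stand-in for the fwlist-generating bash script.
# md5sum emits "HASH  NAME" (two spaces) - the format the upgrade TCL
# script later parses out of fwlist; version sort (-V) on the name field
# makes the newest image sort last (the "last match wins" behavior above).
mkdir -p /tmp/fw-demo && cd /tmp/fw-demo
printf 'img-a' > c2960-lanbasek9-mz.122-55.SE10.bin   # dummy stand-in image
printf 'img-b' > c2960-lanbasek9-mz.122-55.SE12.bin   # dummy stand-in image
md5sum *.bin | sort -k2 -V > fwlist
cat fwlist
```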

We intentionally don't reload devices here - you can probably imagine the problems this would cause in a real network: a switch higher in the network topology could finish and reload while devices relying on it are still transferring, causing their downloads - and with them the Mass Config Push from Unimus - to fail.

As you can see in the script, we tried our best to add a number of comments to describe parts of the code and what they do to make it easier for anyone to understand and even modify the code according to their needs.

Preparing Unimus and Mass Config Push presets for IOS upgrade

Here are the Config Push presets we will be using to pull the FW images and perform the upgrade and reload the devices:

Config Push preset 1 - Upgrade devices

Just like last time, we will run the upgrade in a single Config Push preset. This preset will download the necessary files. Note the use of tclsh (the TCL shell) and the log_user command set to 0 before running the copy commands - this suppresses the output of those two commands, which would otherwise generate unpredictable output and create unwanted output groups. This way, we make sure that any actual output is generated by the script itself.

SCP

tclsh
log_user 0
exec "copy scp://SCP_USER:SCP_PASS@FW_SRC_ADDR/fwlist flash:"
exec "copy scp://SCP_USER:SCP_PASS@FW_SRC_ADDR/ios_upgrade.tcl flash:"
tclquit
tclsh ios_upgrade.tcl PROTOCOL FW_SRC_ADDR SCP_USER SCP_PASS

Where:

SCP_USER - SCP user
SCP_PASS - SCP password
PROTOCOL - scp or http - protocol used to download an FW image
FW_SRC_ADDR - IP or hostname of FW image source device

Please replace all the example values with your actual ones. Don't forget to check Require "enable" (privileged-exec) mode for this Config Push.
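
For illustration, here is the same preset with hypothetical values filled in (the same example credentials and server IP that appear in the troubleshooting section further below):

```
tclsh
log_user 0
exec "copy scp://unimus:scppass8520@10.30.50.70/fwlist flash:"
exec "copy scp://unimus:scppass8520@10.30.50.70/ios_upgrade.tcl flash:"
tclquit
tclsh ios_upgrade.tcl scp 10.30.50.70 unimus scppass8520
```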

HTTP

tclsh
log_user 0
exec "copy http://FW_SRC_ADDR/fwlist flash:"
exec "copy http://FW_SRC_ADDR/ios_upgrade.tcl flash:"
tclquit
tclsh ios_upgrade.tcl PROTOCOL FW_SRC_ADDR FW_SRC_PORT

Where:

PROTOCOL - scp or http - protocol used to download an FW image
FW_SRC_ADDR - IP or hostname of FW image source device
FW_SRC_PORT - OPTIONAL - define this argument only if your webserver is listening on a port other than 80, otherwise remove this argument altogether, script will default to port 80

Please replace all the example values with your actual ones. Don't forget to check Require "enable" (privileged-exec) mode for this Config Push.

Config Push preset 2 - Reload devices

tclsh
exec "reload in 3"
tclquit

This is a simple preset that reloads devices, with the reload scheduled 3 minutes out. Feel free to adjust it to your needs: the reload in command accepts two formats - MMM or HHH:MM - or you can use reload at instead to define a specific date and time for the reload.
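
As a quick reference, here are the scheduling forms mentioned above as they would be typed at the IOS CLI (the values and annotations are illustrative):

```
reload in 3          ! reload 3 minutes from now (MMM format)
reload in 1:30       ! reload in 1 hour 30 minutes (HHH:MM format)
reload at 02:00      ! reload at the next occurrence of 02:00
reload cancel        ! cancel a pending scheduled reload
```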

Let me quickly address why we use the TCL shell to execute a simple reload - in my testing I encountered an inconsistent sequence of prompts when sending the reload command, and I can imagine there can be even more variations between other devices and IOS versions. Use of the TCL shell handles the issue - it just works without needing to handle various IOS inconsistencies.

Example of a successful run

And here's how we want a run of the upgrade Config Push preset to go:

While we wish everyone had exactly the same results as we did, some of your devices might not finish successfully. While most script-specific errors are self-explanatory, we have also put together a troubleshooting FAQ below to help you understand and hopefully fix most of the errors reported by Unimus or the script.

Troubleshooting FAQ

Unimus returned INTERACTION_ERROR

This error is caused by Unimus not receiving any recognizable output from a device before a timeout runs out, which is 20 seconds by default for Cisco devices.

We already touched on overriding timeouts via Advanced settings in Part 1, but we would like to add some more information here. How do you find the right value for your network?

One of the possible ways to go about this is the trial-and-error approach. You can try to increase it gradually (1 minute, 2 minutes, etc.). An alternative would be to try a larger value (e.g. 5 minutes like in my case) right away. We would recommend the latter, as you can have hundreds of devices fail with this error, and choosing random devices and trying to time image download manually to find out if a certain timeout is enough would be time-consuming.

One Config Push feature that comes in useful here is the context menu of any push output group. From it, you can choose to rerun the preset only on the devices in that one output group. You can progressively increase the timeout and keep adjusting until no device is left in the error output group. Alternatively, clone the preset with the devices from the error output group to continue tuning only those devices.

If you get INTERACTION_ERROR from the timeout while the FW image was downloading, then unfortunately after Unimus terminates the session, your device(s) will continue downloading the file in the background. So if you quickly increase your timeout and rerun this preset on these devices, the ones with such a background download still running will finish this push with a device is busy error (see below section) returned by the script.

Keep in mind that INTERACTION_ERROR could also include some outliers, such as devices that use a different syntax for commands or syntax for response prompts that the script might not catch, so don't go to extreme lengths with this timeout. If there are devices that you can't fix by increasing the timeout, then stop and let us know. As mentioned in the preface, you should expect some errors and tuning to make large scale automation work seamlessly across a large fleet of different IOS devices.

Script returned error message Error occurred during download - device is busy. Aborting...

In the preceding section, we described a specific case in which device(s) with INTERACTION_ERROR can create an output group with this script-specific error after a quick rerun of the preset.

This error can show up when your device cannot download a chosen FW image in time and you rerun your preset on them before they had a chance to finish the download. IOS's copy command return a Device or resource busy error, as it is already downloading the file in the background. Time is the best cure here. Leave devices in this output group for a couple of minutes and try again.

Script returned error message Error occurred during download - insufficient space left on device. Aborting...

Your device doesn't have enough free space left on it. In the section above, we also mentioned a potential issue when the script failed due to INTERACTION_ERROR, but the device might have kept downloading the firmware. Alternatively, your device may effectively not be able to store more than a single FW image file as its flash is already consumed by your current running FW image.

If that is your case, add the command del /force *.bin to the Config Push preset just after log_user 0 and before the copy command for downloading the fwlist file. This will cause any file with a .bin extension to be removed from the root of your flash (it is not recursive).

Be careful not to delete anything important, though. This will delete any file with .bin extension. This has (hopefully) obvious drawbacks - it can leave you with an unbootable device if the currently configured boot IOS image gets deleted and a power failure occurs during the update to a new image. Caution recommended.
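
Before pushing such a deletion fleet-wide, it's worth checking on a sample device what is actually on the flash and which image is currently configured to boot - for example:

```
dir flash:
show running-config | include boot system
```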

Script returned error message Error occurred during download - protocol error. Aborting...

This script-specific error most likely suggests an SSH-related problem - typically unsupported KEX algorithms, Ciphers or Host Key Algorithms offered by your IOS device and rejected by your FW image source server. You might have never seen such a problem when interacting with Unimus - that is because Unimus is more liberal with KEX and other crypto than a typical default OpenSSH installation (we know many networking devices use older crypto protocols).

If this happens to you, I would recommend adding the diffie-hellman-group1-sha1 KEX to your SSH server's config file. It should resolve most of the devices with this error. If that is not the case, you may want to try manually connecting from your Cisco device to the FW image source server, or you can turn on terminal monitor and debug for SCP with these commands

terminal monitor
debug scp all

and try to manually download, for example, the fwlist file from the image source server with:

copy scp://unimus:scppass8520@10.30.50.70/fwlist flash:

Here's an example of the output you can expect:

cisco#copy scp://unimus:scppass8520@10.30.50.70/fwlist flash:
Destination filename [fwlist]?
%Error opening scp://unimus:scppass8520@10.30.50.70/fwlist (Protocol error)
cisco#
*Feb  5 05:31:45.181: SSH2 CLIENT 0: kex algo not supported: client diffie-hellman-group1-sha1, server curve25519-sha256,curve25519-sha256@libssh.org
cisco#

From the information in the example above, you would then add diffie-hellman-group1-sha1 to your server's SSH configuration.
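
On a server running OpenSSH, that would mean a config change along these lines (the path and exact syntax may vary by distribution and OpenSSH version; restart the SSH service afterwards):

```
# /etc/ssh/sshd_config
# append the legacy KEX the IOS client offered in the debug output
KexAlgorithms +diffie-hellman-group1-sha1
```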

Script returned an error message Unknown error...

If you encounter this error, let us know. This indicates an unexpected error and will require some debugging.

There is also a topic on our forums you can use to report any issues, provide feedback or ask questions: https://forum.unimus.net/viewtopic.php?f=11&t=1426

]]>
<![CDATA[ Automating Cisco IOS updates with Unimus - Part 1 ]]> https://blog.unimus.net/automating-cisco-ios-upgrades-with-unimus-part-1/ 61fb33d1e683410001ce9cca Wed, 16 Feb 2022 16:24:23 +0000 Intro

We believe that keeping the firmware / software of networking devices up-to-date is one of the most important security measures in any network. Sadly, this is much easier said than done in any network at scale. Previously, we brought you a guide on automating network-wide upgrades of MikroTik's RouterOS (link here).

This time, we tackle automating upgrades of Cisco IOS devices. We will do so in 2 parts, using 2 different approaches - this article is the first part where we will focus on providing a simple and quick solution.

This solution requires 3 main components:

  • FW image source - a server which will serve as a source / host for FW image(s) used for upgrade
  • Cisco IOS devices to upgrade, which will be downloading a provided FW image
  • Unimus' Mass Config Push feature to push commands to your devices and automate the FW upgrades

Here is a conceptual diagram of what our testing topology looks like:

Unimus will be used to perform and automate the IOS upgrades, but also to help us sort and organize IOS upgrade results into output groups, so we don't have to examine every single device output manually. Rather, Unimus will group devices by the outputs we receive to easily identify successful and / or failed upgrades - if any pop up.

Preparing the Image / FW source

You will need a server to serve / host your FW image(s) from. Any server which your Cisco IOS devices can download files from will suffice. For our showcase, we will use a VM running Linux. This way we can host our images easily and can leverage pretty much any transfer method we choose.

FW Transfer methods

When it comes down to transferring files between our FW source and IOS devices, we chose two protocols for this showcase: SCP and HTTP. We usually see administrators relying on TFTP or FTP, but seldom see anyone choosing SCP or HTTP. Thus, we decided to showcase these admittedly less popular options.

SCP

If you choose SCP, you may need to make some adjustments on your FW image server. As SCP uses the SSH protocol, there is one potential issue you will have to deal with. If you have some older Cisco IOS devices, and especially those which don't offer any sort of addition/exclusion/enforcement of more secure KEX (Key Exchange), Ciphers and Host Key Algorithms; and your source server runs a generally newer SSH server - you may need to enable one or more of the legacy algorithms which your devices are capable of supporting.

We would recommend adding the diffie-hellman-group1-sha1 KEX to your SSH server's config file as a precaution. It should resolve most of these potential issues. If that is not the case though, you will need to check one or more devices manually. Try to download the FW image manually and optionally also turn on terminal monitor and SCP debugging.
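
As an illustration, on a server running OpenSSH this could look like the following (exact needs depend on your devices; the commented-out Ciphers / HostKeyAlgorithms lines are only examples of further legacy options some very old IOS images may require):

```
# /etc/ssh/sshd_config - enabling legacy algorithms for old IOS clients
KexAlgorithms +diffie-hellman-group1-sha1
# possibly also needed for very old images:
#Ciphers +aes128-cbc,3des-cbc
#HostKeyAlgorithms +ssh-rsa
```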

HTTP

If you already use / run a web server, then choosing HTTP will save you time. There shouldn't be any extra configuration required on either side - assuming your IOS devices will be able to access your web server.

Preparing Unimus and Mass Config Push preset for Cisco IOS upgrade

Here are the Config Push presets we will be using to pull the FW images and perform the upgrade and reload the devices:

Config Push preset 1 - Upgrade devices

We start with the main Config Push preset, which will do the upgrade - without reloading the devices. We don't want to reload immediately after pulling an image, as that would likely cause problems (devices reloading in the wrong / random order and causing connection loss). Some devices could reload before others finished their FW transfer, which could cause pushes to fail for some of our devices.

IOS image transfer using SCP

tclsh
log_user 0
exec "copy scp://SCP_USER:SCP_PASS@FW_SRC_ADDR/UPGRADE_FW_IMAGE flash:"
ios_config "boot system flash:UPGRADE_FW_IMAGE"
tclquit

Where:

SCP_USER - SCP user
SCP_PASS - SCP password
FW_SRC_ADDR - IP or hostname of FW image source device
UPGRADE_FW_IMAGE - file name of a chosen upgrade FW image

Please replace all the example values with your actual ones. Don't forget to check Require "enable" (privileged-exec) mode for this Config Push.

IOS image transfer using HTTP

tclsh
log_user 0
exec "copy http://FW_SRC_ADDR/UPGRADE_FW_IMAGE flash:"
ios_config "boot system flash:UPGRADE_FW_IMAGE"
tclquit

Where:

FW_SRC_ADDR - IP or hostname of FW image source device
UPGRADE_FW_IMAGE - file name of a chosen upgrade FW image

Please replace all the example values with your actual ones. Don't forget to check Require "enable" (privileged-exec) mode for this Config Push.

If you know some of your devices are slow (they will take a long time to download the IOS image), then we recommend one more step before running the Mass Config Push preset. In my case, I knew one of my devices required as much as 5 minutes to download the IOS image, which would otherwise hit one of Unimus' default timeouts. If this is your case as well, we recommend overriding the timeouts and giving devices a much longer time to finish. Here's how you can do it via the Advanced settings of a Push Preset:

In my case, I extended it to 5 minutes = 300 seconds = 300,000 milliseconds.

Config Push preset 2 - Reload devices

tclsh
exec "reload in 1"
tclquit

This is a simple preset that reloads devices, with the reload scheduled 1 minute out. Feel free to adjust it to your needs: the reload in command accepts two formats - MMM or HHH:MM - or you can use reload at instead to define a specific date and time for the reload.

Let me quickly address why we use the TCL shell to execute a simple reload - in my testing I encountered an inconsistent sequence of prompts when sending the reload command, and I can imagine there can be even more variations between other devices and IOS versions. Use of the TCL shell handles the issue - it just works without needing to handle various IOS inconsistencies.

Quickly setting up a HTTP server

While not recommended for production use, on Linux you can very quickly and easily spin up an HTTP server that serves files from the current directory by running:

python -m http.server

You can use this to quickly and easily host images for your devices to pull from. The URL to use in your Config Push presets would look something like this:

http://your_machine_ip:8000/image_name.bin

Example of a successful run

And here's how we want a run of the upgrade Mass Config Push preset to go:

While we wish everyone had exactly the same results as we did, some of your devices might not finish successfully. It is possible you may encounter errors, such as some devices not finishing the download in time, some devices not being able to download the file at all, or some other small changes in syntax on some devices that are unaccounted for.

If any such issue occurs, Unimus will inform you via its own error assessment of the failed pushes, such as CONNECTION_ERROR, COMMAND_UNSUPPORTED, INTERACTION_ERROR, etc. INTERACTION_ERROR may be a bit more difficult to troubleshoot - as the reasons for this error may vary.

In Part 2 of this series we will improve the upgrade process significantly by using TCL scripting. There will be an added layer of error detection by the upgrade TCL script itself, which should handle and help identify most of the failure scenarios you can run into.

Keep an eye on our blog for the Part 2 of this series coming soon! There is also a topic on our forums you can use to provide feedback or ask questions: https://forum.unimus.net/viewtopic.php?f=11&t=1426

]]>
<![CDATA[ RouterOS and MTU - a collection of useful scripts ]]> https://blog.unimus.net/routeros-and-mtu-a-collection-of-useful-scripts/ 6201bc9be683410001ce9f49 Tue, 08 Feb 2022 15:20:44 +0000 MTU on MikroTik's RouterOS is something you usually don't deal with - that is until you have to deal with it because things stopped working. Alternatively, you decide to implement MPLS in your network, and then usually MTU becomes one of the things you deal with daily.

In this post, I want to share a few useful MikroTik scripts for dealing with MTU.

1. Auditing L2 MTU

Let's start by looking at how to find out the maximum L2 MTU your network can currently safely transit. Please note the script below only checks physical interfaces; if you already have virtual interfaces (like VPLS or any PPP-type interfaces) you would like to check, you will need to adjust the script.
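
As a quick aside on what "maximum size of L2 frames" means here, an illustrative calculation (note that RouterOS L2MTU does not include the 14-byte Ethernet header; the tag and label counts below are just an example):

```
required L2 MTU = L3 MTU + VLAN tag overhead + MPLS label overhead
                = 1500  + (2 x 4 bytes)     + (2 x 4 bytes)
                = 1516 bytes
```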

I will assume you already calculated the maximum size of L2 frames you need on your network. In this case, checking if they can safely transit your RouterOS devices is fairly simple:

{
:local minimalMtu 1600

:local mtuCheck do={
  :if ([/interface get $1 l2mtu] < $2) do={
    :put ("Interface " . [/interface get $1 name] . " has MTU under " . $2)
  }
}

# ethernet
:foreach i in=[/interface ethernet find running] do={
  $mtuCheck $i $minimalMtu
}

# wireless
:foreach i in=[/interface wireless find disabled=no] do={
  $mtuCheck $i $minimalMtu
}
}

The script does some filtering - such as only checking running ethernet interfaces and non-disabled wireless interfaces. Adjust as required.

2. Setting max L2 MTU on all ports

This would be an equivalent of enabling jumbo frames on other vendor's gear. If you are asking "how do I enable jumbo frames on MikroTik / RouterOS", this is the answer. We simply allow as large L2 frames on all physical interfaces as our hardware allows.

# ethernet - set maximum supported L2MTU by hardware
/interface ethernet
:foreach i in=[find] do={
  set $i l2mtu=[/interface get $i max-l2mtu]
}

# wireless - max supported L2MTU is 2290
/interface wireless
:foreach i in=[find] do={
  set $i l2mtu=2290
}

Please note this will flap each port affected by an MTU change. This may result in traffic getting dropped for a few seconds, OSPF sessions dropping their state, etc.

This should be safe to run on your devices (other than the above mentioned link flap), as it simply sets maximum allowed L2 frame size, without doing anything to L3 packet MTUs.

3. Auditing L3 MTU

Now let's move on to auditing L3 MTU. The issue with L3 MTU is usually the opposite of L2 MTU - in most cases, you want to keep the L3 MTU at 1500 (however much we wish we could transit larger packets over the internet...).

{
:local targetMtu 1500

:foreach i in=[/ip address find] do={
  :local iface [/ip address get $i interface]

  :if ([/interface get $iface actual-mtu] != $targetMtu) do={
    :put ("L3 MTU on interface " . [/interface get $iface name] . " is not " . $targetMtu)
  }
}
}

PPP-like interfaces might be an exception from the 1500 L3 MTU rule for you. If so, adjust the script as required.

We use the actual-mtu property of interfaces for checking. This is useful because some protocols (PPP-like and VPN interfaces) support MTU negotiation, so even if you configure mtu at 1500, the other side might not support this, and actual-mtu might be lower.
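
If you want to see both values side by side while investigating such a mismatch, a small helper in the same style as the scripts above can be used (illustrative; on some RouterOS versions the configured mtu of physical interfaces may read as "auto"):

```
/interface
:foreach i in=[find] do={
  :put ([get $i name] . ": configured mtu=" . [get $i mtu] . ", actual-mtu=" . [get $i actual-mtu])
}
```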

4. Setting L3 MTU

Here we will set all interfaces with non-1500 L3 MTU to 1500. I included filtering of EoIP and PPP-like interfaces here as an example.

{
:local targetMtu 1500
:local filterTypes "eoip|ppp-out|l2tp-out"

:foreach i in=[/ip address find] do={
  :local iface [/ip address get $i interface]

  :if ([/interface get $iface type] ~ $filterTypes) do={
    :put ("Ignoring interface " . [/interface get $iface name] . " due to filter")
  } else={
    :if ([/interface get $iface mtu] != $targetMtu) do={
      :put ("Updating MTU to " . $targetMtu . " on " . [/interface get $iface name])
      /interface set $iface mtu=$targetMtu
    }
  }
}
}

As you can see, we filtered based on interface type. You can list the types of your interfaces like this:

/interface
:foreach i in=[find] do={
  :put ("Type of interface " . [get $i name] . " is " . [get $i type])
}

5. Checking and setting MPLS MTU

Finally let's see how to check, and set, the MPLS MTU. Here we have 2 simple scripts. One to check MPLS MTU:

{
:local targetMtu 1550

:if ([/mpls interface get [/mpls interface find default=yes] mpls-mtu] = $targetMtu) do={
  :put "MPLS default interface MTU is CORRECT"
} else={
  :put "MPLS default interface MTU is WRONG"
}
}

We structure the commands in an if-else block on purpose, so Unimus' Config Push output grouping will nicely group all devices with correct (and incorrect) MTU into 2 groups.

And finally here is a small script to set MPLS MTU:

{
:local targetMtu 1580

/mpls interface
set [ find default=yes ] mpls-mtu=$targetMtu
}

Outro

I hope these scripts can make dealing with MTU a little easier for you. If you want to discuss the scripts (or anything related to this topic), please check the forum topic corresponding to this blog post on our forums.

]]>
<![CDATA[ Unimus Backup Exporter ]]> https://blog.unimus.net/unimus-backup-exporter/ 61fb270ee683410001ce9ca4 Fri, 04 Feb 2022 02:14:58 +0000 Unimus allows you to backup all of your device configurations into a convenient and accessible platform. However, you may want to keep copies of your backups stored offsite in the case of a catastrophic failure that removes your ability to use your Unimus server.

To solve this problem, we created the Unimus Backup Exporter. This script will allow you to download all backups from your Unimus server. Once you download them, you may want to export them offsite, for example to AWS S3 using the AWS CLI, or possibly to a local NFS share. We provide built-in GIT functionality in the Exporter script as another alternative solution if desired.
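
For example, an offsite copy after the exporter runs could be as simple as this AWS CLI one-liner (the bucket name and paths are hypothetical):

```
aws s3 sync ./backups s3://example-offsite-bucket/unimus-backups --delete
```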

Let us show you how you can use the Exporter in a few steps.

Step 1 - Preparing Unimus and creating an API token

The first step we need to take is to create an API token for the Exporter script. After you log into your Unimus system, you will find a User management section on the sidebar. After opening this section, at the bottom of the page, you will see API tokens. Click Add to create a new token, and keep this page open for a later step.

creating API token in Unimus

Step 2 - Downloading the Unimus Backup Exporter

From your command line, navigate to the directory where you would like to install the script, and use wget to download it. Then unzip the archive and make the script executable. For example:

[~] $ mkdir unimus-exporter
[~] $ cd unimus-exporter
[~/unimus-exporter] $ wget https://github.com/netcore-jsa/unimus-backup-exporter/releases/latest/download/unimus-backup-exporter.zip
[~/unimus-exporter] $ unzip unimus-backup-exporter.zip
[~/unimus-exporter] $ chmod +x unimus-backup-exporter.sh

Step 3 - Setting up your config file.

Now that we have our script downloaded, we need to configure an env file to tell it how to access our server. We will do this in the following two parts.

Part 1 - Local backups

A sample env file called unimus-backup-exporter.env is provided. In this file, you will find examples of all the options you can use to configure the script.

The following settings will export all of your backups to your local filesystem, into a backups directory alongside the script. In Step 1, we kept the page with the Unimus API token open; we will use that token here for the API key setting. Please note that every value should be in double quotes.

unimus_server_address="http://foo.bar:8085"
unimus_api_key="insert api key here"
backup_type="latest"
export_type="fs"

backup_type has two options:

  • latest - pulls the most recent backups from Unimus
  • all - pulls all backups from Unimus

Any options not used can be removed or commented out with a preceding #, as shown below.

#git_username="foo"

After running the script, you will see all of your backups in the backups directory, listed by IP address and the Unimus device ID.

[~/unimus-exporter/backups] $ ls -l
total 2
drwxr-xr-x 1 user group 4096 Jan 18 09:38 '192.168.4.1 - 6'
drwxr-xr-x 1 user group 4096 Jan 13 23:47 '192.168.3.1 - 2'
[~/unimus-exporter/backups] $

If you only want to export to your local file system, you can skip part 2 of this step and go to Step 4.

Part 2 - Pushing to Git

Some users will want to have the option to move the backups offsite. We have included GIT functionality into the script to make this easier. Inside of your unimus-backup-exporter.env file, you need to add some additional settings. These options are going to be dependent on your GIT server / repo. The most common options are going to be the following.

git_username="foo"
git_email="foo@bar.org"
git_server_protocal="http"
git_server_address="192.168.4.5"
git_port="22"
git_repo_name="User/unimus-backup-exporter"
git_branch="master"

You may also need to set the git_password option if you use password authentication.

git_password="password"

git_server_protocal has three options:

  • http - pushes to GIT using HTTP
  • https - pushes to GIT using HTTPS
  • ssh - pushes to git using SSH

Please note that if you are using SSH key authentication, you must add the private SSH key used to authenticate to the server before running the script.

Once you have these set correctly, we are ready to run the script!

Step 4 - Running the Exporter

Now that you have a working configuration file, we can execute the script by running the following command in the script directory.

[~/unimus-exporter] $ ./unimus-backup-exporter.sh
Getting device data
Getting Device Information
Exporting latest backups
2 backups exported
Export successful
Script finished
[~/unimus-exporter] $ 

After running the script, you will see the following directory structure in the backups directory.

[~/unimus-exporter/backups] $ ls -l
total 2
drwxr-xr-x 1 user group 4096 Jan 18 09:38 '192.168.4.1 - 6'
drwxr-xr-x 1 user group 4096 Jan 13 23:47 '192.168.3.1 - 2'
[~/unimus-exporter/backups] $

If you are exporting to Git, you will see something similar to the following output.

[~/unimus-exporter] $ ./unimus-backup-exporter.sh
Getting device data
Getting Device Information
Exporting latest backups
2 backups exported
Export successful
Pushing to git
Initialized empty Git repository in /home/user/unimus-exporter/backups/.git/
[master (root-commit) 5fabcf7] Initial Commit
 2 files changed, 878 insertions(+)
 create mode 100644 192.168.3.1 - 2/Backup 192.168.3.1 2021-02-16-03:00:43-EST 2.txt
 create mode 100644 192.168.4.1 - 6/Backup 192.168.4.1 2021-11-01-03:00:27-EDT 6.txt
Enumerating objects: 6, done.
Counting objects: 100% (6/6), done.
Delta compression using up to 32 threads
Compressing objects: 100% (6/6), done.
Writing objects: 100% (6/6), 7.61 KiB | 3.80 MiB/s, done.
Total 6 (delta 0), reused 0 (delta 0)
remote:
remote: The private project user/test_unimus_git was successfully created.
remote:
To ssh://192.168.4.5/user/test_unimus_git.git
 * [new branch]      master -> master
Everything up-to-date
Push successful
Script finished
[~/unimus-exporter] $

If using GitHub or GitLab, you should be able to see the backups in the web interface.

Backups list

Step 5 - Automating the exporter

Typically, you will want to run the Exporter script periodically to ensure you have the latest backups. To do this, you can schedule a cron job. Given the script shouldn’t need any elevated privileges, adding the following line to your user’s crontab -e will set up the script to run every night at 3 AM. You should schedule the script to run after Unimus is done backing up your devices.

0 3 * * * /path-to-script/unimus-backup-exporter.sh

A log is generated in the script’s directory. Your output should look like this.

Log File - 2021-11-04 21:46:16
2021-11-04 21:46:16 Getting device data
2021-11-04 21:46:16 Getting Device Information
2021-11-04 21:46:17 Exporting latest backups
2021-11-04 21:46:17 20 backups exported
2021-11-04 21:46:17 Export successful
2021-11-04 21:46:17 Pushing to git
2021-11-04 21:46:19 Push successful
2021-11-04 21:46:19 Script finished

You can use the log to monitor results and create alarms and notifications if the export fails for any reason. If the script fails to finish, an error message will be written to the log.

Log File - 2021-11-04 21:49:40
ERROR: 2021-11-04 21:50:01 Unable to connect to unimus server

That’s all that is needed to set up our exporter and automate it to run daily.

]]>
<![CDATA[ Automating FRR backups with Unimus - a how-to guide ]]> https://blog.unimus.net/automating-frr-backups-with-unimus-a-how-to-guide/ 6155a969ed7c6b0001874b81 Tue, 07 Sep 2021 08:17:00 +0000 The goal of Unimus is to automatically, and out-of-the-box support any networking equipment without having to manually feed all the information about it into the system. The overall design of Unimus, and specifically our Discovery mechanism make this possible on your networking devices.

However, there are some cases in which this is not possible - specifically when networking functions are provided by software running on a general-purpose machine with a generic OS; for instance, a package installed on your Linux server running Ubuntu, Debian, etc. While going through all installed packages on a Linux machine and properly identifying networking-related software is possible (at the cost of causing load on these machines), natively backing up all the possible configurations of all software packages is just about impossible. Unimus would have to understand the packaging specifics of each package across all Linux distributions, and would have to understand how the configuration is structured (even when external configuration can be included through config files) for each of these packages, which is not realistic.

Lately we have seen multiple inquiries about specific networking packages, so we decided this is a good time to share a guide on how you can create and upload backups of almost any software's config files into Unimus. The package we chose to feature in this article is the routing software suite FRRouting.

With its capabilities and availability across all major Linux distributions (including Debian / Ubuntu, CentOS, RHEL and more), FRR has a large user-base. Other similarly powerful networking-focused software packages of course exist, and you may want to keep their backups in one place - Unimus, which you use to back up all your networking equipment anyway. This is the good kind of centralization, after all.

Let's get to the good stuff. While we are not able to support software such as FRR directly, you can use one of the features of Unimus to do so. Say hello to the Unimus API! With a bit of simple bash scripting on the host machines running FRR (or other software), you can collect your configuration files and/or binary backups generated by such software and store them in Unimus. All the usual features of Unimus (like change management / change notifications, etc.) will work as expected.

Unimus FRR backup diff

Without any further ado, let us show you how we can do just that in a few steps.

STEP 1 - preparing Unimus and setting you up with an API token

As a first step, we want to prepare things in Unimus for our new device and also generate an API token to be able to submit API calls and upload our backups into Unimus. Let's start with the API token:

Unimus API token


Now, let's create our new device, which will represent the machine running FRRouting, and set it to be Unmanaged. We will specify its IP/hostname and add a description (which helps with identification in your device list):

Unimus add device

If you see a message informing you about an unsuccessful discovery job, this is expected. This was an automatic discovery triggered as soon as we added the device and before we set it to be Unmanaged.

STEP 2 - getting familiar with Unimus API

Unimus' API is a powerful tool and many functions of Unimus are exposed through it. You can check the full API documentation here if you wish. The API function we are interested in is the one for creating a new backup.

Starting from this point, you will need your Linux CLI. Let's start with a curl command we will use to upload our backups into Unimus:

curl -H "Accept: application/json" -H "Content-type: application/json" -H "Authorization: Bearer <token>" \
-d '{"backup":"<backup>","type":"<TEXT>"}' "http://example.unimus/api/v2/devices/<deviceId>/backups"

There are 5 parameters we need to provide in order to successfully push a backup:

<token>            - this is our API token we generated in step 1
<backup>           - this will be our encoded backup we will prepare in step 3
<TEXT>             - this will be a type of backup we will choose (BINARY/TEXT) also in step 3
<example.unimus>   - this is your Unimus server address
<deviceId>         - this will be an ID of our device we created in step 1

Now, let's get the ID of our newly created device representing the machine running FRR. We can use one of two functions to do so - one searches for the device by its IP/hostname, the other by its description. Let's check out both:

Option 1 - searching device by its IP/hostname
https://wiki.unimus.net/display/UNPUB/Full+API+v.2+documentation#FullAPIv.2documentation-Devices-getdevicebyaddress

curl -H "Accept: application/json" -H "Authorization: Bearer <token>" \
"http://<example.unimus>/api/v2/devices/findByAddress/<address>?attr=s,c"

There are 3 parameters we need to insert in this command:

<token>            - this is our API token we generated in step 1
<example.unimus>   - this is Unimus' server address
<address>          - this is IP/hostname of our device

As per the example, let us show you our version of the curl call, inserting the API key, Unimus' address and device's IP:

curl -H "Accept: application/json" -H "Authorization: Bearer \
eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJhdXRoMCJ9.ACruAhyEiipDrX7-QRsPfAJpsTooibm5RznqSHSMtuM" \
"http://10.10.10.10:8085/api/v2/devices/findByAddress/10.20.30.40?attr=s,c"

And here is a response we got:

"data":[{"id":234,"createTime":1629477261,"address":"10.20.30.40","description":"FRRouter@Deb9@123",
"schedule":null,"vendor":null,"type":null,"model":null,"lastJobStatus":"FAILED","connections":[]}],
"paginator":{"totalCount":1,"totalPages":1,"page":0,"size":20}}

Our device's ID is 234.

Option 2 - searching device by its description
https://wiki.unimus.net/display/UNPUB/Full+API+v.2+documentation#FullAPIv.2documentation-Devices-getdevicesbydescription

As in option 1, here's our version of a curl call, inserting the API key, Unimus' address and device's description:

curl -H "Accept: application/json" -H "Authorization: Bearer \
eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJhdXRoMCJ9.ACruAhyEiipDrX7-QRsPfAJpsTooibm5RznqSHSMtuM" \
"http://10.10.10.10:8085/api/v2/devices/findByDescription/FRRouter@Deb9@123attr=s,c"

And here is a response we got:

"data":[{"id":234,"createTime":1629477261,"address":"10.20.30.40","description":"FRRouter@Deb9@123",
"schedule":null,"vendor":null,"type":null,"model":null,"lastJobStatus":"FAILED","connections":[]}],
"paginator":{"totalCount":1,"totalPages":1,"page":0,"size":20}}

Same as before, our device's ID is 234.
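Rather than reading the ID out of the JSON by eye, you can extract it programmatically. The snippet below is a sketch using standard text tools against the sample response from above (with jq installed, `jq -r '.data[0].id'` would be cleaner):

```shell
# Sample response from this article; in practice you would pipe the live
# curl output into the pipeline instead of using this variable.
response='{"data":[{"id":234,"createTime":1629477261,"address":"10.20.30.40","description":"FRRouter@Deb9@123"}],"paginator":{"totalCount":1,"totalPages":1,"page":0,"size":20}}'

# Grab the first "id" field - good enough when the search returns one device.
device_id=$(printf '%s' "$response" | grep -o '"id":[0-9]*' | head -n 1 | cut -d : -f 2)
echo "$device_id"    # prints 234
```

Storing the ID in a variable like this lets the backup script from step 3 be reused across hosts without hard-coding the device ID.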

At this point, we know what API calls we need to use, we know our API token and we know our device's ID. Let's move to pushing a backup of our config to Unimus.

STEP 3 - preparing a backup and uploading it into Unimus

For this article we tested FRRouting as our weapon of choice and installed it on three Linux machines running Debian 9, Ubuntu 20 and CentOS 7. We can happily report we haven't found any difference in the locations of the configuration files we focus on below. This, however, may be different for the software you are looking to back up with Unimus, so always follow the specific instructions for your installation to find where backups and/or configuration files are stored.

FRR doesn't have a backup feature per se; instead, as with many other software packages, we can simply back up its configuration files. In the case of FRR we will be backing up two configuration files:

/etc/frr/daemons
/etc/frr/frr.conf

We will do so in two ways, to show you the two backup formats you can choose from to suit almost any scenario.

Method 1 - TEXT backup

One of the ways you can choose to create and upload your backup into Unimus is in the form of a text file. This method is generally recommended (if possible), as you will be able to see the contents of the backup and receive appropriate configuration change notifications for it as well. In our case, we will back up two text files, merge them into one, and upload the result as a single text backup. Here is a very simple script to do so:

#!/bin/bash

cd /tmp

#BACKUP PREP
echo -e "#BEGIN /etc/frr/daemons" > frrbackup.txt
cat /etc/frr/daemons >> frrbackup.txt
echo -e "#END /etc/frr/daemons\n\n\n" >> frrbackup.txt
echo -e "#BEGIN /etc/frr/frr.conf" >> frrbackup.txt
cat /etc/frr/frr.conf >> frrbackup.txt
echo -e "#END /etc/frr/frr.conf" >> frrbackup.txt

#BASE64 ENCODING
encodedbackup=$(base64 -w 0 frrbackup.txt)

#BACKUP PUSH INTO UNIMUS
curl -H "Accept: application/json" -H "Content-type: application/json" -H "Authorization: Bearer \
eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJhdXRoMCJ9.ACruAhyEiipDrX7-QRsPfAJpsTooibm5RznqSHSMtuM" \
-d '{"backup":"'"$encodedbackup"'","type":"TEXT"}' "http://10.10.10.10:8085/api/v2/devices/234/backups"

#CLEANUP
rm frrbackup.txt

Here is a breakdown of the workflow of this script:

  • First it prepares a backup file with some additional formatting, so that it is easier to distinguish the beginning/end of each source file in the final text file.
  • Then the content of the prepared backup file is passed to a BASE64 encoder and loaded into a variable - this encoding is very important, as it turns the contents into a single streamlined string of characters, allowing us to move it efficiently. Note that BASE64 encoding doesn't encrypt the content of your files; anyone could decode it with any BASE64 decoder.
  • Then, using the curl call from step 2, we fill in all required parameters with actual data and set the backup type to TEXT - note the use of extra single/double quotes to insert the variable containing our encoded backup. This format is important so that the variable is processed correctly.
  • Lastly, we clean up.
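To see the encoding step in isolation, here is a standalone round-trip (the sample content is illustrative, not part of the backup script). It also demonstrates why BASE64 must not be mistaken for encryption:

```shell
# BASE64 turns arbitrary bytes into a single transfer-safe string - and it is
# trivially reversible, which is why it is an encoding, not encryption.
original='hostname frr-router-1'
encoded=$(printf '%s' "$original" | base64 -w 0)   # -w 0 disables line wrapping (GNU coreutils)
decoded=$(printf '%s' "$encoded" | base64 -d)      # anyone can reverse the encoding
echo "$decoded"
```

The `-w 0` flag matters here: without it, GNU base64 wraps output at 76 characters, which would break the single-line JSON payload we send to the API.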

We can now run the script. As specified in our API documentation, we are expecting this output:

{"data":{"success":"true"}}

If you don't see this output, it indicates there was a problem sending the backup. Refer to our API documentation for more information if needed. Now, let's check Unimus and see if we got our backup and that it is readable:

Unimus add device

We can properly see the content of the text file - Unimus automatically decoded our BASE64-encoded string. We can download the backup, send it, or check diffs if something changes - just like with any other backup.

Method 2 - BINARY backup

The second backup format is a binary file. Binary backups can be useful if your software has configuration spread across multiple files in multiple formats, so generating a single text file is not feasible. In such a case, packing all files into a single archive and uploading the archive is the way to go.

#!/bin/bash

cd /tmp

#BACKUP PREP
tar -czvf frrbackup.tar.gz -C /etc/frr/ daemons frr.conf

#BASE64 ENCODING
encodedbackup=$(base64 -w 0 frrbackup.tar.gz)

#BACKUP PUSH INTO UNIMUS
curl -H "Accept: application/json" -H "Content-type: application/json" -H "Authorization: Bearer \
eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJhdXRoMCJ9.ACruAhyEiipDrX7-QRsPfAJpsTooibm5RznqSHSMtuM" \
-d '{"backup":"'"$encodedbackup"'","type":"BINARY"}' "http://10.10.10.10:8085/api/v2/devices/234/backups"

#CLEANUP
rm frrbackup.tar.gz

Here is a breakdown of the workflow of this script:

  • First it packs our target files into a single .tar.gz archive - you can use other formats if you prefer a different archiver/compressor.
  • Then the archive is passed to a BASE64 encoder and loaded into a variable - this encoding is very important, as it allows the binary file to be transferred as a single streamlined string of characters. Note that BASE64 encoding doesn't encrypt the content of your files; anyone could decode it with any BASE64 decoder.
  • Then, using the curl call from step 2, we fill in all required parameters with actual data and set the backup type to BINARY - note the use of extra single/double quotes to insert the variable containing our encoded backup. This format is important so that the variable is processed correctly.
  • Lastly, we clean up.

We can now run this script. Again we are expecting this output:

{"data":{"success":"true"}}

If you don't see this output, it indicates there was a problem sending the backup. Refer to our API documentation for more information if needed. Now, let's check Unimus to see if we got our backup and what we can do with it:

Unimus add device

As you can see, Unimus received our backup; however, compared to a text backup, we cannot see the contents of this file - it is a binary file and could be in any format (.tar.gz, .bin, .zip, etc.). We can still download or send it, and if the binary file changes, we will see a difference in its SHA1 sum. Note that when downloading a binary backup, Unimus will not append any extension to the file, so we recommend renaming it right away.
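If you want to verify locally which archive version Unimus holds, you can compute the same SHA1 sum on the host before uploading. This is a sketch - the file here is a stand-in for the real archive:

```shell
# Compute the SHA1 of the archive before uploading; if the sum shown in
# Unimus later differs, the stored binary backup has changed.
printf 'demo archive contents\n' > /tmp/demo-backup.tar.gz   # stand-in for frrbackup.tar.gz
sum=$(sha1sum /tmp/demo-backup.tar.gz | awk '{print $1}')
echo "$sum"                                                  # 40 hex characters
rm -f /tmp/demo-backup.tar.gz
```

Logging this sum from the backup script gives you a local record to compare against what Unimus displays for each stored binary backup.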

STEP 4 - job scheduling

One-time execution of scripts is nice; however, just as with any backup job, we want it to run periodically. The last part of this article covers adding a scheduled job to cron. Depending on your software and/or the user privileges you have, you might need to change the way you set up the cron job, e.g. via user-specific jobs using "crontab -e". Since our setup needs root privileges to access cron configuration files, we will add a job to /etc/crontab directly, and set our script to run every night at 3 AM (just like the default schedule in Unimus).

0 3 * * *    root    /root/frrbackup.sh

And that's it! We have set up a device for FRR in Unimus, and created an automated way to generate and upload backups into Unimus.

Final words

We hope this article can serve as a template that can be used to upload any files / backups into Unimus. If you have any questions, or run into any issues using the examples in this article, please feel free to post in the Automation section of our Forums.

]]>
<![CDATA[ Release Overview - Unimus 2.1.0 ]]> https://blog.unimus.net/release-overview-unimus-2-1-0/ 615c40c8ed7c6b0001874bf4 Sun, 15 Aug 2021 12:14:00 +0000 With each new release, we also upload a release overview video, so if you prefer a video format, you can find it here:
Youtube - 2.1.0 Release Overview video

For those who prefer readable content, read on!


"Backup filters" feature

Backup filters

You can now create custom backup filters, which allow you to ignore or completely delete parts of the backup received from your devices. This can be useful if you do not want to store some data that the device outputs, or if you want to ignore parts of the backup for config change notifications. For example, if your device outputs some data that is different on every backup, you can create ignore filters to tell Unimus this data should be ignored. Using the backup filters, you can create completely custom rules on what to ignore or delete to cut down on change notifications.

For more information and examples, we have a dedicated blog article.


"NMS Sync" upgraded to preset-based

NMS sync

We have migrated NMS Sync to use presets instead of static configuration. This provides multiple benefits. For example, you can now configure sync from as many NMS systems of the same type as you like, simply by creating more presets. The second benefit is that NMS Sync is now fully compatible with Zones: you can specify which Zone an NMS Sync preset should import into. These updates make NMS Sync very flexible. You can import all your customers' networks from a single NMS into multiple Unimus Zones, or you can import from multiple NMS systems, each representing a separate Zone.

For more information and examples, please check our NMS Sync blog article.


"Advanced Settings" for Mass Config Push

Push advanced settings

We have added a new "Advanced Settings" window to Config Push. In most use cases our default Config Push behavior was sufficient, and worked as expected. However, in a few specific cases there was a need to fine tune how Config Push behaved, and the new Advanced Settings allow these adjustments.

These new settings are covered in this blog article.


NetXMS Agent as a proxy for Zones

Push advanced settings

In 2.1, we are adding the option to use a NetXMS Agent as a Unimus Zone proxy. If you are using NetXMS and already have an Agent deployed in a remote network, you don't need to deploy Unimus Core - you can just use your existing Agent. This cuts down on the amount of software you need to deploy, and if you already use NetXMS, you can now onboard your networks into Unimus much faster.

More details in the documentation on our Wiki.


Other minor new features and improvements

We have also added many new minor features, such as improvements to notifications, UX improvements to Config Push, warnings when an older version Core is connected, performance improvements, and many other UI and UX improvements across Unimus. Head down to the full Changelog to learn more!


Security improvements and bug fixes

We focused heavily on security in 2.1, as outlined in our Update on Unimus codebase and release security article earlier this year. We have updated both our backend and frontend frameworks to latest LTS releases, reviewed Unimus build security and infrastructure and introduced code-signing to all binary releases.


With each new release, we also add support for new network vendors and devices. This release is no different and we are adding support for 21 new device types, from 15 separate networking vendors.

The Changelog for 2.1.0 is quite large, so this article doesn't cover it completely. If you want to see all the changes in this release, please check the full Changelog below:

Full changelog:

= Version 2.1.0 =
Features:
  Added notifications if a Zone goes offline (can be configured in the "Notifications" screen)
  You can now select which Zone the "Basic import" imports devices into
  Config Push History (on the Dashboard) now shows which user ran the Push, or if it was scheduled
  Improved Diff performance (diffs with a large change-set could be very slow)
  Improved Import / NMS Sync handling - import/sync jobs are now queued, so a single job doesn't block you from queueing others
  You can now disable the Core connection listener if desired (if not using remote Cores)
  The "get" endpoints for "devices" in the API ("/api/v2/devices/...") now also return the status of the last device job
  "Sensitive data stripping" has been moved from "Other settings" to "Backups > Configuration"
  "Advanced settings" > "Discover un-discovered devices when new Credentials are added", "added" was changed to "added or bound"
  Improved Output Group matching in Mass Config Push
  Added new icons for Comments / Tags / Filters in all tables
  Added "unimus.core.tcp.connect-timeout" config option to control Core->Unimus connection timeout (default 5 seconds)
  You can now disable specific job types (Discovery, Backup, Scan, Config Push) if desired
  Added support for devices which ask "Configure from terminal?" when the "configure" command is sent
  On JunOS devices, "monitor stop" will be sent before backup to make sure logs are not present in backup
  Improved support for Adtran NetVanta devices
  Improved support for Datacom devices
  Improved support for ExtremeWare devices
  Improved support for Quanta devices

  New "Custom Backup Filters" feature:
    - you can create custom filtering rules on backups to filter any data you don't want inside backups
    - both completely deleting and/or replacing data for a filtered text are available
    - allows for creation of rules based on Tags, device vendors or device types
    - https://unimus.net/blog/backup-filters-unimus-210

  "NMS Sync" is now configured using Presets, and now properly works with Zones
    - you can now define as many NMS Sync connections as you like
    - fully integrated with Zones, you can now use NMS Sync to sync devices to multiple Zones
    - existing configuration automatically migrated to Presets
    - https://unimus.net/blog/nms-sync-improvements-unimus-210

  Ability to use the NetXMS Agent as a proxy for Zones:
    - you can use a NetXMS Agent as a poller for an Unimus Zone instead of an Unimus Core
    - if you use NetXMS, you no longer need to deploy both a NetXMS Agent and an Unimus Core for the Zone
    - https://wiki.unimus.net/display/UNPUB/NetXMS+Agent+as+Zone+Proxy

  New "Advanced Settings" feature for Mass Config Push:
    - allows overriding credentials used to connect to devices by this Push Preset
    - allows overriding timeouts used in device communication by this Push Preset
    - allows settings the prompt matching mode used by this Push Preset
    - https://unimus.net/blog/config-push-advanced-settings-unimus-210

  Unimus Core version is now checked by Unimus and shown in Zones
    - added versioning to the Core communication protocol
    - Unimus now checks if Cores are using a supported version during connection
    - Unimus will notify on the Dashboard if any "older" version Cores are connected
    - Core version is now shown in the "Zones" screen (if the Zone is using an Unimus Core as its proxy)

  Added support for:
    - Adit 600 series
    - Brocade G620
    - Cisco FirePOWER for AWS
    - Datacom DmOS devices
    - Datacom DmSwitch devices
    - D-Link DXS 5000 series
    - Extreme 200 series
    - Extreme VOSS / VSP OS
    - Extreme Wing AP 510
    - Fiberhome devices
    - Fiberstore (FS.com) Campus switches
    - Fortinet FortiWeb
    - IBM Flex System Fabric
    - IBM RackSwitch
    - Netgear GSM switches
    - Nomadix EG devices
    - Nomadix NSE devices
    - Siklu Terragraph
    - Ubiquiti airFiber 60 5G
    - Ubiquiti airFiber 60 LR
    - Ubiquiti GigaBeam
    - ZTE ZXA devices

Fixes:
  Fixed Slack notifications not working with new Slack Apps (changes in Slack API for new Apps)
  Fixed issue where, while writing into an input box, a desync could occur that caused a character to be lost and the cursor to jump to the start of the input box
  Fixed built-in backup filtering in rare cases could add many "<--filtered-->" text instances into a backup
  Fixed config change notification not sent if a backup was pushed over the API
  Fixed API limited max page size to 50, even if user specified a much larger size
  Fixed Import and/or NMS Sync could get stuck if there was an internal error during import/sync
  Fixed Unimus could stop working on HSQL after a long period of time if data retention cleanup settings were enabled
  Fixed running discovery/backups on all devices over the API did not work (single device requests worked properly)
  Fixed Import and NMS Sync running UI notifications could get lost when moving around the application
  Fixed wrong Config Change Notifications on specific Cisco IOS versions
  Fixed wrong Config Change Notifications when a few specific config items were present on F5 devices
  Fixed "Backup it very long, do you want to continue?" warning boxes not working properly
  Fixed errors when trying to add extremely long passwords (over 130 characters) in the "Credentials" screen
  Fixed inconsistent case sensitivity in Config Search (normal matching is now always CI, regex matching is done per regex settings)
  Fixed Config Search showing whole backup when Context Size was set to 0
  Fixed multiple edge-case issues and errors in Config Search
  Fixed multiple edge-cases where a device address with a whitespace was not properly trimmed
  Fixed checkbox and selection not being properly reloaded after refresh (F5)
  Fixed "last run" value not being updated in a Config Push preset in some circumstances
  Fixed multiple UI issues (element overflows, wrong element sizing on small resolutions) in Config Push
  Fixed "$[no-wait]" not properly working in Config Push under certain circumstances
  Fixed Discovery failing on newer firmware version of HP/HPE ProCurve devices
  Fixed devices that used "enter" as pagination not working over Telnet
  Fixed jobs failing on newer Adtran NetVanta devices
  Fixed jobs failing on a few specific HP Comware devices
  Fixed jobs failing on a few specific devices over Telnet
  Fixed jobs failing on specific configurations of ExtremeWare devices

Security fixes:
  Upgraded the frontend framework to the latest LTS version
  Upgraded the backend framework to the latest LTS version
  Fixed not properly invalidating all sessions of a logged-in account if it was removed (sessions would work until session timeout)
  Fixed users being able to see Tags they did not have access to in Config Search (only list of Tags affected, search results were properly secured)

Embedded Core version:
  2.1.0

Known issues:
  ISSUE: When many jobs are running, and you scroll or re-order the Devices table, some rows can be duplicated and/or malformed
  WORKAROUND: table re-render fixes this - scroll out and back in, or reorder again and elements will rerender properly
  STATUS: issues in framework after upgrade to latest LTS - we are investigating how to fix :(

  ISSUE: Opening a Config Push preset is slow and locks the UI session with large results (1000+ devices)
  WORKAROUND: none
  STATUS: issue scheduled for fix in next version

  ISSUE: Search in Config Push preset outputs is slow and locks the UI session with large results (1000+ devices)
  WORKAROUND: none
  STATUS: issue scheduled for fix in next version

  ISSUE: Device ownership (Owner) is not properly set when using Basic Import
  WORKAROUND: none
  STATUS: issue scheduled for fix in next version

  ISSUE: An API call to the "devices" endpoint using PATCH doesn't work
  WORKAROUND: none
  STATUS: issue scheduled for fix in next version

  ISSUE: Already logged in user gets 'Access denied' when they manually navigate to '/'
  WORKAROUND: none
  STATUS: issue scheduled for fix in next version

  ISSUE: When moving many devices across Zones, an error can occur
  WORKAROUND: none, retry the move again
  STATUS: issue scheduled for fix in next version

  ISSUE: Notifications don't contain a list of addresses of filtered devices (no connectors, Core offline, etc.)
  WORKAROUND: none
  STATUS: issue scheduled for fix in next version

  ISSUE: Config push creation - 'Require "enable" mode' is not automatically checked when 'Require "configure"' is checked
  WORKAROUND: none, works properly after preset has already been created
  STATUS: issue scheduled for fix in next version

  ISSUE: Data put into HTTP query parameters are not escaped in NMS Sync importers that use HTTP protocols
  WORKAROUND: none
  STATUS: issue scheduled for fix in next version

  ISSUE: Sorting by Job Status in Devices does not work properly if Unmanaged devices are present
  WORKAROUND: none
  STATUS: issue scheduled for fix in next version

  ISSUE: Some screens in Unimus show time in server's time zone, others in client's (browser's) time zone
  WORKAROUND: none, issue only relevant if client has different time zone than server
  STATUS: we are debating on how to fix this - will likely create a setting to select which TZ should be used
        
]]>
<![CDATA[ Config Push - new Advanced Settings in Unimus 2.1.0 ]]> https://blog.unimus.net/config-push-new-advanced-settings-in-unimus-2-1-0/ 6155a969ed7c6b0001874b80 Thu, 01 Jul 2021 08:14:00 +0000 Based on user requests, we are adding a few new Advanced Settings to Config Push in Unimus 2.1.0. For the vast majority of use cases the default Config Push behavior doesn't need adjustments and works as expected. However, in a few specific cases there was a need to fine tune how Config Push behaves, and the new Advanced Settings allow those adjustments.

Prompt recognition mode

Normally, Unimus learns the full prompt of the device, and waits for it before sending the next command from the Push preset (unless the $[no-wait] modifier is used). Check out the Mass Config Push documentation on the Wiki for more details.

Here is an example of device communication during a Push. In the Config Push preset, we would check Require enable (privileged-exec) mode and Require configure (configuration) mode, and provide these commands:

interface ethernet 1/1
description "Server Link"
exit

Here is what the device communication would look like:

switch-rack1> <enter>                                    # "switch-rack1" is learned as the hostname
switch-rack1> enable<enter>                              # bring the device into the desired CLI mode (enable first)
switch-rack1# <enter>
switch-rack1# configure terminal<enter>                  # now bring the device into configure mode
switch-rack1(config)#interface ethernet 1/1<enter>       # device is in the desired CLI mode, send commands
switch-rack1(config-if)#description "Server Link"<enter>
switch-rack1(config-if)#exit
switch-rack1(config)#

The prompt learning behavior in Unimus exists to make sure Unimus knows when to interact with the device (send commands to it), and when to wait and collect the output of those commands. There is, however, one case where this is not desired: when you actually want to change the hostname (and therefore the prompt) on the device. Here is an example:

Commands in Config Push preset:

hostname switch-backup-r1
write memory

Here is what the device communication would look like (skipping the CLI mode changes):

...
switch-rack1(config)#hostname switch-backup-r1<enter>
switch-backup-r1(config)#                             # Unimus fails here, unable to recognize new prompt

The issue here is that even though the change was deployed, Unimus failed to recognize the new prompt, and the write memory command was not sent. This is where the new Prompt recognition mode setting comes in. You can now set the preset to the Simple recognition mode, which will make Unimus use a much simpler prompt recognition method (just looking at the prompt's ending character), and the above preset would work as expected.

Please note the Learning prompt mode should be used whenever possible, as it makes Config Push much more reliable. You should only use the Simple recognition mode when you know the hostname / prompt of the device will change as a part of the Push.

Overriding timeouts

As described in our Wiki article on timeout configuration, Unimus uses multiple different timeouts when communicating with your devices. For some very long operations, however, you might want to increase these timeouts - if the device takes too long to finish producing its output, or if some operation on the device (such as saving its config) takes longer than the timeout, the job would be considered failed. Here is an example:

copy tftp://server.local/file nvram:file
y$[no-enter]

In this example, we are copying a file from a remote server to all devices this Push will be executed on. Here is what the device communication would look like:

(lab-swch1) #copy tftp://server.local/file nvram:file<enter>

Mode........................................... TFTP
Set Server IP.................................. server.local
Path........................................... ./
Filename....................................... file
Data Type...................................... file

Management access will be blocked for the duration of the transfer
Are you sure you want to start? (y/n) <y>

#
# this operation takes a very long time - over 1 minute
#

File transfer operation completed successfully.

(lab-swch1) #

Since the file transfer in this case takes over 1 minute, the job would fail due to the timeout running out. In this case, checking Override timeouts and setting the timeouts to 100000 (100 seconds) would fix the failure. Please note that the override will set ALL the timeouts to the specified value (more in the Wiki article).
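The failure mode can be modeled in a few lines of code - the chunk delays and timeout values below are illustrative, not Unimus internals:

```python
# Simplified model of why a long-running, silent operation trips the timeout.
# Illustrative only - this is not how Unimus is implemented internally.

def wait_for_output(chunks, timeout_ms):
    """Consume (delay_ms, text) chunks; fail if any silent gap exceeds the timeout."""
    output = ""
    for delay_ms, text in chunks:
        if delay_ms > timeout_ms:
            raise TimeoutError(f"no output for {delay_ms} ms (limit {timeout_ms} ms)")
        output += text
    return output

# The TFTP copy above is silent for over a minute before printing its result:
transfer = [
    (500, "Are you sure you want to start? (y/n) y\n"),
    (65_000, "File transfer operation completed successfully.\n"),
]

try:
    wait_for_output(transfer, timeout_ms=30_000)      # default-sized timeout fails
except TimeoutError as e:
    print(e)

print(wait_for_output(transfer, timeout_ms=100_000))  # overridden timeout succeeds
```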

Overriding credentials

The final new option in the Advanced Settings is the ability to override the credentials Unimus will use when communicating with the device. This is useful when, for security purposes, you only keep read-only credentials in the Credentials screen, but still want to be able to use Config Push without having full credentials available system-wide. In this case, you can check the Override credentials box and provide whichever of the available credentials you want to override. Any credentials you don't want to override can be left empty.

Final words

We hope these settings provide another bit of missing flexibility to Config Push and fix a few edge-cases where Config Push could fail before. These new Advanced Settings are available in Config Push starting with Unimus 2.1.0. Please head over to the Download section to download the latest Unimus release.

]]>
<![CDATA[ Improvements to NMS Sync in Unimus 2.1.0 ]]> https://blog.unimus.net/nms-sync-improvements-unimus-210/ 6155a969ed7c6b0001874b7f Fri, 11 Jun 2021 08:11:00 +0000 The NMS Sync functionality was originally intended to ease device management (adding / removing devices) and onboarding of networks into Unimus. If you use a monitoring system (NMS / RMM), your network infrastructure should already be present in your NMS - so instead of having to manually add your network(s) into Unimus, you could just configure Unimus to pull in devices from your NMS. This makes it much easier and faster to deploy Unimus into new network(s), but also automates the ongoing management of devices in Unimus. If you deploy a new device, you can just add it to your NMS, and Unimus will automatically adopt it from there instead of having to manually add the device to multiple systems.

NMS Sync was originally introduced all the way back in 0.1.3, and while we have been adding connectors for new NMS systems continually over the years, the way NMS Sync was configured and worked has remained mostly the same. With the introduction of Zones in 2.0 however, the way NMS Sync was implemented started to show its age.

We started seeing requests for importing to different Zones from different containers / tags inside an NMS, or to import into a single Unimus instance from multiple NMSes. With 2.1, we have reworked how NMS Sync is configured and works, and it should now be flexible enough to support a wide range of use-cases.

Sync Presets

We have changed NMS Sync configuration to be preset-based. A preset defines a single policy of where from, what, and where to import. You can create as many presets as you would like, pointing to a single NMS system, or pointing to multiple different NMS systems.

Here is what a configured preset looks like:

Unimus NMS sync preset

The left section defines where to import from - the details of the NMS itself. The right section defines Sync rules, or what to import, and where to import it to.

Sync Rules

Sync Rules tell a Sync Preset where inside the NMS to adopt devices from, and which Zone in Unimus these devices should be imported into. A single Sync Rule can specify multiple sources from within the NMS - for example multiple containers, Tags or tree roots where devices in the NMS are located. The options are a little different for each NMS we support, as they depend on how devices inside the NMS are organized. You can create as many rules for a single preset as you need.

Here is an example where we configure 2 rules for sync from Zabbix. One rule imports from the `Internal infrastructure` Zabbix group into the Default Zone in Unimus, and the 2nd rule imports from 2 Zabbix groups (`Managed routers` and `Customer CPEs`) into Zone 3 in Unimus:

Unimus NMS sync preset

Examples

The easiest example is if you have a single NMS system and would like to import devices into only a single Zone in Unimus. In this case, a single Sync Preset with a single Sync Rule will be completely sufficient. Just set up your NMS details and create a single Sync Rule. Inside the Sync Rule, you can set up multiple sources (containers, tags or IDs) from that NMS, and point them all to the Default Zone.

If you have a single NMS system, but would like to import different nodes from the NMS into different Zones in Unimus, a single Sync Preset will also be sufficient. You can create multiple Sync Rules - each rule defining the sources inside the NMS, and the separate Zone its devices should be imported into. In the end, you would create as many Sync Rules as there are Zones in Unimus you want to import devices into.

Finally, if you have multiple NMS systems from which you would like to import into Unimus, you can create multiple Sync Presets - one for each NMS. Inside these presets, you can create rules that define which devices from each of the NMSes are adopted, and which Zone they should be imported into.
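The preset / rule structure from the examples above can be sketched as a small data model - the field names here are purely illustrative, not the actual Unimus schema:

```python
# Hypothetical data model for Sync Presets and Sync Rules, to make the
# examples above concrete. Names and shapes are illustrative only.

presets = [
    {
        "nms": "zabbix-prod",
        "rules": [
            {"sources": ["Internal infrastructure"], "target_zone": "Default"},
            {"sources": ["Managed routers", "Customer CPEs"], "target_zone": "Zone 3"},
        ],
    },
    {
        "nms": "prtg-dc2",
        "rules": [{"sources": ["Core"], "target_zone": "DC2"}],
    },
]

def resolve_targets(presets):
    """Map each (NMS, source container/tag) pair to the Zone it imports into."""
    mapping = {}
    for preset in presets:
        for rule in preset["rules"]:
            for source in rule["sources"]:
                mapping[(preset["nms"], source)] = rule["target_zone"]
    return mapping

targets = resolve_targets(presets)
print(targets[("zabbix-prod", "Customer CPEs")])  # -> Zone 3
```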

Supported NMSes

As of 2.1.0, Unimus supports syncing from 7 different NMS systems:

  • NetXMS
  • Zabbix
  • PRTG
  • LibreNMS
  • Panopta
  • Powercode
  • Observium
Unimus NMS sync preset

If you use a different NMS system that we don't have support for yet, please get in touch with us!

Migration

If you already use Unimus and have an existing NMS Sync configuration, upgrading to 2.1 will automatically migrate your configuration to presets. After the upgrade, you don't need to do any manual reconfiguration - everything should continue working as expected.

The updates to NMS Sync are available starting with Unimus 2.1.0. Please head over to the Download section to download the latest Unimus release.

]]>
<![CDATA[ New Backup Filters feature in Unimus 2.1.0 ]]> https://blog.unimus.net/new-backup-filters-feature-in-unimus-2-1-0/ 6155a969ed7c6b0001874b7e Wed, 09 Jun 2021 08:08:00 +0000 To fully explain when and how the new Backup Filters are useful, let's start with an overview of how Unimus stores backups for your devices without using the new filters. Normally, the backup procedure Unimus performs works as follows:

  1. connect to the device
  2. switch to a desired CLI mode to perform backup
  3. retrieve configuration from the device (for example show running-config)
  4. remove pagination, perform formatting, etc.
  5. remove dynamic contents from the backup
  6. compare retrieved config to currently known config of the device
  7. create a new configuration point (backup) or update existing

The last step of this process is the most complex one. The "backups" that Unimus retrieves are used to build a versioned configuration history for your device. To simplify - if there are no changes to the config of your device, a new "backup" point is not required. So rather than showing you individual backups, Unimus shows you configuration points - ranges during which a given configuration was valid on your device - building a configuration timeline of the device. Any point in this timeline is a valid backup, but you can also see when and how the config of your device changed over time. This is actually more difficult to explain than it is in reality - simply put, you see a timeline of how your device was configured over time, and you can use it to restore to any point in the past.
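The core idea - only creating a new configuration point when the config actually changed - can be sketched in a few lines (an illustration of the concept only, not the Unimus implementation):

```python
# Sketch of how a versioned timeline avoids storing duplicate backups:
# a new configuration point is only created when the retrieved config
# differs from the latest known one. Illustrative only.

def add_backup(timeline, new_config):
    """Append a new configuration point only if the config changed."""
    if timeline and timeline[-1] == new_config:
        return False   # same config: the current point stays valid
    timeline.append(new_config)
    return True        # config changed: new configuration point

timeline = []
add_backup(timeline, "hostname r1\n")
add_backup(timeline, "hostname r1\n")                      # no change, no new point
add_backup(timeline, "hostname r1\nntp server 10.0.0.1\n")

print(len(timeline))  # 2 unique configuration points
```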

Here is an example of a configuration timeline for a device:

Unimus device config timeline

We can see that this particular device has had 9 unique configuration points since we added it to Unimus in Dec. 2019. The current configuration of this device (the one on top) has been applied to the device on Apr. 9th, and is valid up until today (Jun 9th).

To be able to do this, Unimus needs to figure out if there was a config change on the device, which is not as trivial as it might seem. For example, if the configuration of the device contains a timestamp (Cisco IOS will show the current timestamp of when show running-config was executed), this needs to be ignored so this "change" in the configuration contents doesn't create a new configuration point. Over the years, we have built a large set of what we call "dynamic content filters" - in fact, for each device type we support in Unimus, we write these dynamic content filters as a part of adding support for the device.

While our built-in filters work properly in the vast majority of cases, sometimes you might have dynamic data as a part of your configuration - some output that is unique on each configuration printout. This can break the configuration timeline for your device, as Unimus will consider each backup run as a unique configuration point, which makes the configuration timeline pretty useless.

This is where the new Backup Filters come in - you can define rules which will tell Unimus that a part of the configuration should be ignored for the purposes of comparing the current known config to the new received config.

How the new Backup Filters work

There are 2 types of Backup Filters you can define:

  • Deleted data filters
  • Ignored data filters

How Deleted data filters work - before comparing the new backup to the currently stored backup, Unimus will REMOVE the matched data from the "new" backup. This means the data will not be stored as a part of the backup at all - as if it was never received from the device.

How Ignored data filters work - during comparison of the new backup to the currently stored backup, Unimus will IGNORE the matched data for the comparison purposes. The data will still be stored as a part of the backup, but if changes in the data happen, those changes will be ignored.

The order of filtering is:

  1. Deleted data filters
  2. Built-in "dynamic content filters"
  3. Ignored data filters

In the previous section of the article we described how the Ignored data filters can be used to ignore changes to parts of the backup. The Deleted data filters on the other hand can be used to completely remove some parts of the backup. For example, if your device outputs the DHCP lease database as a part of its configuration, but you don't want to store that in Unimus, you could create "Deleted data" filters that would filter out the appropriate lines from the config. In effect, Unimus would then never store this part of the config of the device.
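The filtering order above can be sketched as a small pipeline - the patterns and data model below are illustrative only, not Unimus internals:

```python
import re

# Sketch of the filtering order described above: Deleted data filters strip
# content before storage, Ignored data filters only affect the comparison.
# Simplified model - not the actual Unimus implementation.

delete_patterns = [re.compile(r"^ip dhcp lease .*\n", re.M)]    # never stored
ignore_patterns = [re.compile(r"^! Last updated: .*$", re.M)]   # stored, ignored in diffs

def apply_filters(patterns, config):
    for pattern in patterns:
        config = pattern.sub("", config)
    return config

def process_backup(stored, new_raw):
    new = apply_filters(delete_patterns, new_raw)   # 1. deleted data filters
    # (2. built-in dynamic content filters would run here)
    changed = (apply_filters(ignore_patterns, stored)
               != apply_filters(ignore_patterns, new))  # 3. ignored data filters
    return new, changed

stored = "! Last updated: 09:00\nhostname r1\n"
new, changed = process_backup(
    stored, "! Last updated: 10:30\nhostname r1\nip dhcp lease 10.0.0.5 aa:bb\n")

print(changed)         # False - the timestamp-only change is ignored
print("dhcp" in new)   # False - the lease line was deleted before storage
```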

How to configure Backup Filters

The Backup Filters can be added in the new Backup > Configuration window:

Backup filter configuration

After clicking to add a new filter, you have multiple options of how and to what it should apply:

New backup filter

There are a few ways to filter:

  • Line starts with
  • Line ends with
  • Regex

For the "line" filters, if there is a match, the whole line is filtered / deleted. Regex is a bit more flexible. If there are no capture groups in the regex, the whole regex match is filtered / deleted. However if you use capture groups, only the content captured by the capture groups will be filtered / deleted. This allows for very precise and flexible matching using regex and capture groups, so you can filter / delete only exactly what you need.
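Here is a small Python illustration of that capture-group behavior (the `drop_groups` helper is our own example code, not part of Unimus):

```python
import re

# Illustration of the capture-group behavior described above: without groups
# the whole match is removed; with groups, only the captured text is.

line = "set ipsec secret=SuperSecret123 peer=10.0.0.2"

# No capture group: the entire match is filtered out.
print(re.sub(r"secret=\S+", "", line))
# -> "set ipsec  peer=10.0.0.2"

def drop_groups(match):
    """Remove only the text captured by the groups, keeping the rest of the match."""
    start, text = match.start(), match.group(0)
    for i in range(match.lastindex or 0, 0, -1):
        g_start, g_end = match.span(i)
        text = text[:g_start - start] + text[g_end - start:]
    return text

# Capture group: only the captured password is filtered, the key name stays.
print(re.sub(r"secret=(\S+)", drop_groups, line))
# -> "set ipsec secret= peer=10.0.0.2"
```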

There are also multiple ways the filter can be applied:

  • Vendor
  • Device Type
  • Tag

If a vendor is chosen, then the filter will apply to all devices from that vendor. If a device type is chosen, the filter will apply to all devices of that device type. Finally you can choose to apply the filter only to devices tagged with a specific Tag, for targeted filtering on specific devices.

Example

Filtering changes to IPSec secrets. In this example, we setup an IGNORE filter for IPSec passwords:

MikroTik ipsec filter

You will notice this uses a "Regex" filter with a capture group to filter only the actual password (note also the use of a non-capturing group for more flexible matching). As described earlier, this causes Unimus to IGNORE any changes of the IPSec passwords, and not create new configuration points when these passwords change. HOWEVER, since this is an IGNORE filter, the configuration will still be present in your backups:

Unimus filtered diff

We could uncheck the Filter dynamic content checkbox to show the actual cleartext which was saved. If we wanted to completely remove these secrets from the backups, we could have used a DELETE filter instead of an IGNORE filter.

The new Backup Filters feature is available starting with Unimus 2.1.0. Please head over to the Download section to download the latest Unimus release.

]]>
<![CDATA[ New Mass Config Push features in Unimus 2.0.11 ]]> https://blog.unimus.net/new-mass-config-push-features-in-unimus-2-0-11/ 6155a969ed7c6b0001874b7d Tue, 09 Mar 2021 07:46:00 +0000 Starting with Unimus 2.0.11, we have added 2 major quality-of-life improvements to our Mass Config Push:

  • you can now use Tags to push to groups of devices
  • new omni-search for easily finding anything in push results

Let's look at each of these in more details...

New "Targets" configuration for Config Push:

In previous Unimus versions, you would choose individual devices and "Add" them to a Push preset to tell Unimus which devices it should push to. We have changed this to a new "Targets" configuration. Rather than individual devices, you now define Targets - which can be devices and / or Tags. Unimus will take all the devices from all the tags, plus the individual devices, and create a final device set to push to.
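The resolution of the final device set can be sketched as a simple union - the tag names and data model here are illustrative only:

```python
# Sketch of how a final push target set is resolved from Targets: the union
# of all devices behind the selected Tags plus individually added devices,
# de-duplicated. Illustrative data, not the Unimus data model.

tag_members = {
    "core-switches": {"10.0.0.1", "10.0.0.2"},
    "branch-routers": {"10.1.0.1", "10.0.0.2"},   # overlaps with core-switches
}

def resolve_push_targets(tags, devices):
    targets = set(devices)
    for tag in tags:
        targets |= tag_members.get(tag, set())
    return targets

final = resolve_push_targets(
    ["core-switches", "branch-routers"],
    ["192.168.1.1", "10.0.0.1"],   # individually added devices
)
print(sorted(final))  # each device appears only once, even if matched twice
```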

Here is a UI screenshot of what this configuration looks like:

Config Push targets

Here is an example of how to configure this in practice:

Config Push targets in practice


Omni-search for Config Push results

The other new feature is a universal "Search" box for push results. It searches across all data in push results - group names, device addresses, descriptions, and all outputs in all output groups. Simply put - anything you want to find, just type it into the search box.
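Conceptually, the omni-search just matches the query against every field of every result row - a rough sketch with made-up field names:

```python
# Sketch of an omni-search across all fields of push results: a row matches
# if the query appears in any of its fields. Field names are illustrative.

results = [
    {"group": "OK", "address": "10.0.0.1",
     "description": "core-sw-1", "output": "Config saved"},
    {"group": "FAILED", "address": "10.0.0.2",
     "description": "branch-r1", "output": "Timeout waiting for prompt"},
]

def omni_search(rows, query):
    """Case-insensitive substring match against every field of every row."""
    q = query.lower()
    return [r for r in rows if any(q in str(v).lower() for v in r.values())]

print([r["address"] for r in omni_search(results, "timeout")])   # matches output text
print([r["address"] for r in omni_search(results, "core-sw")])   # matches description
```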

Here is an example of using the search to first find a part of the output, and then to find all devices whose description contains a particular string:

Config Push search

Both of these features are available in Unimus 2.0.11 released today.

]]>
<![CDATA[ Update on Unimus codebase and release security ]]> https://blog.unimus.net/update-on-unimus-codebase-and-release-security/ 6155a969ed7c6b0001874b7c Mon, 15 Feb 2021 07:43:00 +0000 Most of our readers (Hello!) will be familiar with the SolarWinds saga. In December 2020, SolarWinds announced that it's Orion software was exploited in a supply-chain attack. This FireEye article has a nice write-up of the original attack, called SUNBURST. Since then, at least 2 other malicious payloads were found present in Orion - first SUPERNOVA was discovered, and later on Raindrop and Teardrop were also discovered hiding inside the Orion executables.

Many networks were affected, and I'm sure this resulted in a significant amount of work for many of you reading this article. CISA at one point recommended that all systems accessed by Orion in government agencies be rebuilt from scratch (the latest CISA guidelines can be found here).

Security in infrastructure management tools is extremely important. We take incidents like these very seriously, and want to do everything we can to make sure something like this doesn't happen to Unimus. As such, we wanted to publish an article on what we have been doing in the past months to make sure our systems and Unimus itself are safe, and what we plan to do going forward.

Our (NetCore j.s.a.) systems are separated into 3 different groups:

  • Unimus instances customers run in their network (we have no access to these)
  • Our websites, Portal and Licensing Server (our public resources)
  • Our internal environments, tools, servers, workstations, etc. (our internal resources)

Let's start with Unimus:

We have audited our codebase and our build process, and we can happily report that we have found no issues and no signs of tampering nor malicious activity on any systems involved in the Unimus development or build process.

We do however see areas for improvements:

  • Some of the dependencies / libraries we use are not the latest available versions. This includes our backend and frontend frameworks. To fix this, we are updating all dependencies / libraries to the latest versions. This is actually a large amount of work, since both frontend and backend frameworks have new major LTS releases available. The dev team has been working since December migrating to these new LTS versions.
  • We can improve a lot in the code-signing area. Unimus is built from 12 different modules. We will implement code-signing on the module level, and validate module signatures when the final Unimus binary is built out of the individual modules.
  • We will be introducing a Bug Bounty / Security Bounty program for Unimus. More on this to come soon.
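As a rough illustration of the module-level integrity idea mentioned above (real code-signing uses asymmetric signatures; this is in no way NetCore's actual pipeline), a digest check against a trusted manifest might look like:

```python
import hashlib

# Rough sketch of module-level integrity checking at build time: each
# module's digest is verified against a trusted manifest before the final
# binary is assembled. Purely illustrative.

manifest = {"core-module": hashlib.sha256(b"core bytes v1").hexdigest()}

def verify_module(name, content, manifest):
    """Return True only if the module's digest matches the trusted manifest."""
    digest = hashlib.sha256(content).hexdigest()
    return manifest.get(name) == digest

print(verify_module("core-module", b"core bytes v1", manifest))   # untampered module
print(verify_module("core-module", b"tampered bytes", manifest))  # tampered module
```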

In general, we see the security of the Unimus codebase and the build process as good, with areas for improvement that we are now working on. We are giving this priority, so all of the above mentioned improvements will be coming sooner rather than later.

Websites, Portal and Licensing Server:

We have audited all the servers running our public services, as well as the Portal and the Licensing Server codebases and build processes. We can also happily report that we have found no issues and no signs of tampering nor malicious activity on any of our public systems.

Since the Portal holds customer data (your billing info), we want to make sure all your data with us is properly protected (we don't hold any payment data on our Portal directly). We however see the same areas for improvement for our Portal and Licensing server as for Unimus. As such, the dev team has also been migrating both these services to the latest LTS versions of all dependencies / libraries / frameworks. We will also be implementing the same per-module code-signing and validation processes for our Portal and Licensing Server as we discussed for Unimus.

In keeping with full transparency, we also recently published a post on our forums explaining what data our Licensing Server collects from local Unimus instances. You can find the post here.

Finally, we will also launch a Bug Bounty / Security Bounty program for the Portal. More details on this will be released soon, together with the same program for Unimus itself.

Our internal environments, servers, workstations, etc:

We have also reviewed our internal systems and verified that there is no outside access (other than from our offices and our VPNs) to these systems. We have found no issues and no signs of external access or tampering with our internal systems.

We are also continuing to educate our staff on security best-practices, and we heavily value and encourage a security-minded culture in our company. The internal culture and mindset of our company in regards to security is very important to us, and we will increase investment in internal and external security trainings to make sure all our developers and staff stay mindful of security best-practices going forward.

To summarize:

  • We audited the Unimus codebase and build process and found no security issues
  • We audited all our public servers and services and found no security issues
  • We plan to introduce more code-signing and integrity checks into the Unimus build process
  • We are updating all dependencies / libraries to the latest versions across all our software / services
  • We plan to start a Bug Bounty / Security Bounty program

We are happy to answer any security-related questions you might have; and we would also love to hear if you have any feedback or suggestions on what you think we should do better. Please feel free to post any feedback in this forum topic. Thanks!

]]>
<![CDATA[ Release Overview - Unimus 2.0.0 ]]> https://blog.unimus.net/release-overview-unimus-2-0-0/ 6155a969ed7c6b0001874b7b Fri, 15 May 2020 07:07:00 +0000 This article highlights the most significant changes and new major features in the Unimus 2.0.0 and Unimus Core release.

With each new release, we also upload a release overview video, so if you prefer a video format, you can find it here: Youtube - 2.0.0 Release Overview video

For those who prefer readable content, read on!


“Zones” feature and multi-tenancy

"Zones" add support for multi-tenancy, remote networks and distributed polling. You can have a central Unimus server to manage many remote networks, or you can split devices in your network across Zones, with each Zone using a separate Core to spread load from your server across multiple poller Cores.

Unimus Zones

Unimus Core and remote networks

The Core can serve as a remote poller / remote agent for Unimus, and the Zones you create in Unimus can either be polled directly from the Unimus server, or from a Unimus Core. You can check the new Architecture Overview and Zones articles on our Wiki for more details on Zones and Unimus Core.

Unimus & Unimus Core architecture

Full Config Change Notifications over Slack

Previously Unimus would only send config change summaries over Slack, but due to community demand, we have implemented full diffs over Slack. You will need to reconfigure the Slack notification sender in Unimus to use a Slack Bot. Please check our blog article here on how to create and integrate a Slack Bot with Unimus.

Slack diffs

Mass Config Push scheduling

You can now easily schedule Config Push jobs from your existing Push Presets. This greatly extends the automation capabilities of Unimus, allowing you to schedule any configuration deployment. Push results are available in the Push Preset, and in a new “Config Push history” table on the Dashboard.

Config Push Scheduling

Support for binary backups

With 2.0.0, we have added support for storing binary backup files, and also extended all systems in Unimus to support binary backups. This means change detection, diffs, notifications, and everything else will properly work with binary backups.

Unimus binary backup

Push binary backups into Unimus

With support for binary backups, we have also added a new API end-point, which allows you to push binary files to Unimus as device backups. This opens new use-cases for Unimus, as you can now push files to Unimus from external systems or scripts, and Unimus will perform change detection, notifications, and all other functions as expected.

Unimus Binary Backup Push

Push text backups into Unimus

The new API endpoint also supports pushing text files to Unimus, which allows you to extend Unimus with support for any device even if we don't support it directly. You can script backup retrieval yourself, and push the resulting backup file to Unimus for processing, storage, notifications, etc...
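As a sketch of what such a push could look like from a script - note that the endpoint path, field names and auth header below are illustrative placeholders; consult the Unimus API documentation for the exact contract:

```python
import base64
import json

# Sketch of building a backup-push request body. The JSON field names and
# the endpoint path mentioned in the comments are assumptions for
# illustration - check the Unimus API docs for the real contract.

def build_backup_push(config_text, backup_type="TEXT"):
    """Base64-encode the backup and wrap it in a JSON body to POST."""
    encoded = base64.b64encode(config_text.encode()).decode()
    return json.dumps({"backup": encoded, "type": backup_type})

body = build_backup_push("hostname r1\nntp server 10.0.0.1\n")

# This body would then be POSTed to something like
#   <unimus>/api/v2/devices/<id>/backups        (illustrative path)
# with an "Authorization: Bearer <token>" header, using any HTTP client.
print(json.loads(body)["type"])  # TEXT
```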

Unimus Text Backup Push

PRTG and Observium in NMS Sync

We continue to add new sync connectors to NMS sync, and with this release, you can now adopt devices into Unimus from PRTG or Observium. We are adding new connectors with each Unimus release, so if your NMS doesn't yet have a connector available, please keep an eye on our Roadmap and future changelogs.

Unimus New NMS Sync

UI and UX improvements

There are also MANY UI and user experience improvements in this release. Some of the more notable ones are the new "Device Info" table in Devices, last job status indicator in Devices, much more detailed job failure logs for failed Discoveries, new result history tables on the Dashboard, and the "Export backups" functionality in the "Backup" screen.

UI UX improvements

Performance improvements

We have worked hard to improve the UI performance of Unimus in 2.0.0. The UI should now be much more responsive when working with many devices. Specifically, large tables should load much faster - such as Devices, Backups, etc. Config Search server-side logic has also been improved to deliver search results faster.

Bug squashing and issue fixing

We have fixed more than 50 various bugs, issues and UI problems in 2.0.0, some of which have been present since 1.0.0. There are also security fixes for users with Tag-based access restrictions. A big thank-you here goes to everyone testing the 2.0 Beta and RC releases, and helping us iron out all of these.


With each new release, we add support for new network vendors and devices. This time around, we are releasing support for 22 new device types, across multiple networking vendors.
           
The Changelog for 2.0.0 is quite large, and this article doesn't cover it completely. If you want to see the full scope of changes in this release, please check the full Changelog below:

Full changelog:

= Version 2.0.0 =
Important:
  Slack integration has been migrated from a Webhook to a Slack App. Your Slack notifications will not work without reconfiguration.
  Manual migration and reconfiguration is required, please see more in the "Migration warnings" section.

Features:
  New "Zones" feature for support of remote networks - includes a new "Unimus Core" that serves as the remote proxy / remote agent
  Configuration Change Notifications with full diffs are now supported over Slack (if Slack Notification Sender enabled)
  Failed discovery logs now show full details of discovery and why it failed (Dashboard > Latest Failed Jobs)
  Added visual indicator (grey/green/red) of last job result to the Devices table
  "Devices > Info" window completely reworked, now shows much more useful information about the device
  Added Last Backup Date to device info window ("Devices > Info")
  Added REST endpoint to upload backup (Push backup into Unimus)
  Added support for binary backups (currently only possible with API Backup Push)
  Added an "Export backups" button to the "Backups" view - allows to export all or only latest backups for all devices
  Added support for specifying a CRON expression for Schedules (in addition to current options)
  Changed pagination on the "Config Search" view to 500 (up from 10)
  NetXMS client API updated to version 3.1 (NMS Sync)
  Zabbix importer will now import nodes with only Agent-type interfaces
  Added a new help link on the Backups view, "How does Unimus store backups?"
  Added a new Backup Retention Policy - "Number of backups" (will only keep the last X backups per device)
  Added a new "Send Diff" and "Send Backup" popup that replaces the old email input form
  The "Send Diff" and "Send Backup" features now also support sending diffs over Slack
  New global notification options to control where the system FQDN is displayed in notifications (title or body)
  Added system FQDN to notifications which were missing it (all notifications now contain system FQDN)
  Improved system FQDN lookup for notifications on Windows
  Improved message formatting in all Email and Slack notifications
  Improved UX in all sections of the Notification view ("Save" buttons now only active on change, added "Discard" button, etc.)
  Added retention cleanup jobs to the "Show scheduled tasks" window
  Added new "http.proxyType" and "https.proxyType" settings to configure proxy type when running Unimus behind a HTTP(S) proxy
  Improved responsiveness in multiple views in Mass Config Push
  Added a new Easter Egg (hint: "mike", also, Hi Mike!)
  Improved handling of CLI mode changes, many previously unhandled edge-cases now work properly
  Added support for empty password (just press enter) CLI mode changes (enable, configure)
  Improved detection of "Press any key to continue" and "Press enter to continue" prompts
  Added support for "Do you accept this statement [yes/no]" prompts during login
  Added support for shortened prompts on Cisco IOS in Configure mode
  Added support for line-break prompts in Cisco IOS when using tclsh
  Improved support for Cisco ASA Threat Defense and Cisco FirePOWER TDM
  Added support for read-only user accounts on ExtremeOS
  Improved support for Enhanced Security Mode on HP/HPE ProCurve/Provision/ArubaOS
  Added output of "show bof" to TiMOS backup
  Improved support for ArubaOS Wireless Controllers in various edge-cases
  Improved banner detection during CLI login process

  "Zones" feature for support of remote networks and distributed polling
    - you can create as many Zones as required, each zone signifying a unique network
    - new top level "Zones" view for Zone management
    - Zones can be polled directly from Unimus, or using the new Unimus Core serving as the remote proxy / remote agent for the Zone
    - architectural overview: https://wiki.unimus.net/display/UNPUB/Architecture+overview
    - more info about Zones: https://wiki.unimus.net/display/UNPUB/Zones

  "Debug Mode" options moved to the "Zones" menu
    - Unimus allows debugging remote cores directly from the Unimus UI
    - you can also download logs from Remote Cores directly in Unimus
    - this requires setting debug options per-zone, so Debug Mode moved to "Zones"

  Mass Config Pushes can now be scheduled
    - You can now schedule Config Push jobs for more automation power
    - More details on Push result notifications and Push result history below

  Other Mass Config Push improvements:
    - Added "Config Push History" table to the Dashboard
    - Added new "Config Push Result" notifications (enabled by default)
    - Push job status is displayed for each Push preset in Mass Config Push Home view
    - Improved the responsiveness (UI scaling) of the Config Push view

  PRTG importer was added to the "NMS Sync" view
    - uses PRTG API to sync devices from PRTG to Unimus
    - sync possible based on Tags, or by node hierarchy in the device tree

  Observium importer was added to the "NMS Sync" view
    - uses Observium API to sync devices from Observium to Unimus
    - sync only specific devices from Groups, or all devices in Observium

  Updated dynamic (runtime) data filtering from backups in diffs:
    - improved filtering of dynamic (runtime) data from backups in all diff views
    - whenever possible, filtering will no longer make a backup invalid by changing its syntax
    - this only influences diffs - in Unimus and in Config Change notifications
      (View, Download and Send Backup features were always sending raw, unfiltered backups)
    - See more info below in the "Migration warnings" section

  Network Scan improvements:
    - Added "Network Scan History" table to the Dashboard
    - Added new "Network Scan Result" notifications (disabled by default)

  Added support for:
    - ArubaOS-CX devices (Aruba / HPE 8320)
    - more variants of AudioCode devices
    - Blonder Tongue CMTS
    - Casa CMTS
    - Cisco ASA TD
    - Cisco IE (industrial ethernet) switches
    - CTS switches (FOS-3128)
    - more variants of Dell PowerConnect switches
    - Draytek Vigor (Discovery and Config Push only, Backup not supported)
    - Exinda devices
    - Fortinet FortiAnalyzer
    - Fortinet FortiOS v6
    - Harmonics CMTS
    - HPE StoreFabric devices
    - HPE VirtualConnect
    - Huawei Eudemon
    - Huawei VRP in HRP mode
    - Huawei VRP multi-context
    - LANCOM switches (Discovery and Config Push only, Backup not supported)
    - more variants of Mellanox switches
    - Moxa switches
    - Omnitron RuggedNet switches
    - Ubiquiti AirOS CS (custom script) firmwares
    - Ubiquiti UFiber OLT
    - Zhone MXK

Fixes:
  Fixed discovery not running for undiscovered devices when credential was added and discovery should run according to system settings
  Fixed the password of a High Security credential being visible in "Device -> Show Info -> Show credentials"
  Fixed Mass Config Push not working when it contained Un-managed or Undiscovered devices
  Fixed Mass Config Push not working when it contained devices with all connectors disabled
  Fixed Config Search showing only first 500 backups that matched the search
  Fixed Config Search "Expand all" not working
  Fixed wrong (empty) config change notifications on Calix OccamOS based devices
  Fixed device selection selecting devices randomly if they were imported from "Address Importer" or ".csv File Importer"
  Fixed Zabbix importer not importing nodes which only had Agent-type interfaces
  Fixed .csv importer sometimes importing the file header even when "Ignore header" was enabled
  Fixed wrong config change notification for Cisco WLC caused by CDP peer changes
  Fixed wrong config change notification for FortiOS caused by dynamic certificate key output
  Fixed Mass Config Push status showing "Scan Status" instead of "Push Status"
  Fixed multiple extremely rare bugs where Config Search did not show some backups that matched (normally this would never happen)
  Fixed the scheduling service being enabled on Schedule deletion, even if no Push or Scan presets were scheduled (did not schedule jobs, just enabled the service)
  Fixed a very rare bug that could cause backups to fail on devices with very short backups
  Fixed change of backup retention only being applied after service restart
  Fixed not being able to delete schedules in the Deployment Wizard
  Fixed multiple UI inconsistencies and UX pain-points
  Fixed multiple rare edge-case failures when switching CLI modes (enable, configure)
  Fixed ExtremeOS devices not working when used with read-only accounts
  Fixed FS S3900 switches being discovered as Allied Telesis
  Fixed some HP 1910 models not being discovered
  Fixed Network Scan very slow when DNS requests were timing out
  Fixed Network Scan subnets import incorrectly accepting some invalid subnets as valid
  Fixed DNS timeout configuration being ignored
  Fixed some Mellanox switch models not being discovered
  Fixed backup not working on specific TelcoSys T-Marc firmwares
  Fixed backup not working on a few specific Brocade devices
  Fixed very rare login failure on devices with extremely slow data output during login
  Fixed Config Push that required Configure mode not working on some Fiberstore switches
  Fixed Patton/Inalp devices not working (discovery/backup/push) in certain cases
  Fixed ArubaOS Wireless Controllers not working in very rare edge-cases
  Fixed extremely rare login failure on devices with a post-login menu
  Fixed multiple other extremely rare login failures in various edge-cases
  Fixed backup failing on Adtran Total Access with extremely long configurations
  Fixed discovery failing on newer AudioCodes Mediant devices / firmwares
  Fixed extremely rare cases where VT100 control sequences were not properly stripped from backups

  Fixed a bug that could cause ~1% of scheduled backups to fail on slow, or heavily loaded devices
    - on each scheduled run, a small random subset of devices would fail their scheduled backups
    - slow (older) devices, devices under sufficient load to slow down the control plane, or devices with slower external AAA were most affected
    - in the long run, all devices would be properly backed up, as the subset was usually different for each scheduled run
    - running backups manually would work, only scheduled backups were affected

Security fixes:
  Fixed issue that caused imports from HTTPS URLs in "NMS Sync" to not check HTTPS certificates even if "Do not check HTTPS certificates" was not checked
  Fixed users being able to change "Other settings > Sensitive data stripping" even for Tags they didn't have access to
  Fixed users being able to change "Other settings > Per-Tag connectors" even for Tags they didn't have access to

Embedded Core version:
  2.0.0

Migration warnings:
  Slack integration has been migrated from a Webhook to a Slack App. This is due to the addition of sending Configuration
  Change Notifications over Slack. The Webhook API did not support sending Snippets, which Config Notifications require.
  You will need to set up a new Slack App for Unimus, and reconfigure the Unimus Slack sender in "Notifications > Slack".

  For some devices, there may be a single config-change notification after the first backup following the 2.0.0 upgrade.
  This will show a change occurred inside a comment or a non-config line. This is expected due to changes to the dynamic
  (runtime) backup content filtering mentioned in the "Features" section. This is caused by changes to what Unimus
  considers as dynamic (runtime) data inside backups, and you can safely ignore this change notification.

Known issues:
  ISSUE: under rare circumstances, when a Unimus Core disconnects due to packet loss, some jobs may become stuck in Unimus
  WORKAROUND: restarting Unimus is necessary
  STATUS: fixing already in progress - fix coming in 2.0.1

  ISSUE: when you delete the "Device Output Log" file in "Debug mode", any jobs that started before deletion, but finish after
         deletion will recreate the file and write their output to the file
  WORKAROUND: delete "Device Output Log" after all jobs finish / no jobs are running
  STATUS: issue scheduled for fix in 2.0.1

  ISSUE: session timeout doesn't work in certain situations when browser tab is not closed - user's web session can remain logged-in forever
  WORKAROUND: close all tabs in which Unimus is opened, or log-out manually
  STATUS: issue scheduled for fix in 2.0.1

  ISSUE: Importing is possible even with accounts that don't have access to the Default Zone due to Tag-based access restrictions
  WORKAROUND: none, account can be made read-only
  STATUS: issue scheduled for fix in 2.0.1

  ISSUE: Unable to export all backups when two zones have devices with same addresses
  WORKAROUND: none
  STATUS: issue scheduled for fix in 2.0.1

  ISSUE: with higher latency, when writing text into an input box, a desync may occur that causes a character to get lost,
         and the cursor to jump to the start of the input box
  WORKAROUND: none
  STATUS: we are investigating

  ISSUE: unable to set connection timeout in Core - this doesn't influence Core functionality in any way
  WORKAROUND: none
  STATUS: currently no ETA, framework limitations

  ISSUE: special characters can be replaced by '?' under specific circumstances
  WORKAROUND: none
  STATUS: currently no ETA, framework limitations
                    
]]>
<![CDATA[ Slack App configuration for Unimus ]]> https://blog.unimus.net/slack-app-configuration-for-unimus/ 6155a969ed7c6b0001874b7a Fri, 14 Feb 2020 16:24:00 +0000 In Unimus 2.0.0 we migrated our Slack integration to a full-featured Slack App. This was necessary since we added full Config Change notifications over Slack in 2.0.0.

Here is an example of a Slack Config Change notification:

Slack diff

This article shows how to configure a Slack App for usage with Unimus.

1) Open https://api.slack.com/apps and click "Create App" to start:

Slack create app

2) Give your new app a name, select your Slack Workspace, click "Create App":

Slack create app

3) At the bottom of the "Basic Information" section, you can add a name and a logo that your Slack Bot will use in your Workspace:

Slack Bot display information

4) Go to "OAuth & Permissions" in the left menu, and navigate to "Scopes > Bot Token Scopes":

Slack OAuth permissions
Slack Bot token scopes


5) Add the "chat:write" and "files:write" scopes to the "Bot Token Scopes":

Add Slack Bot scopes


6) After scopes are added, click "Install App to Workspace" at the top of the page:

Slack install app

7) You will get a "Bot User OAuth Access Token"; copy the token:

Slack OAuth access token

8) In your Slack Workspace, invite the Unimus Bot user to the channel it will post to:

Slack Bot channel invite
Add Slack channel

9) Now navigate to the "Notifications > Slack" settings in your Unimus server:

Slack settings

10) Paste in your "Bot User OAuth Access Token" and your channel. You can use "#channel" to post in a channel, or "@user" to DM messages to a person:

Slack configured

Don't forget to "Save" your settings after pasting them in. You can use the "Run test" feature to check if your notifications are working.

If the test works, the Unimus Slack Sender is fully configured. You will now receive Config Change Notifications into Slack as soon as Unimus detects a config change in your network.
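If you want to verify the token outside of Unimus first, you can call Slack's chat.postMessage Web API method directly. This is a minimal sketch - replace the token and channel placeholders with your own values (a channel ID also works in place of the "#name" form):

```shell
# Replace with your real Bot User OAuth Access Token and target channel.
SLACK_TOKEN="xoxb-your-token-here"
SLACK_CHANNEL="#network-alerts"

curl -s -X POST "https://slack.com/api/chat.postMessage" \
  -H "Authorization: Bearer ${SLACK_TOKEN}" \
  -d "channel=${SLACK_CHANNEL}" \
  --data-urlencode "text=Unimus Slack test message"
```

A successful call returns a JSON body containing "ok":true; an "ok":false response with an error field (for example "not_in_channel") usually means the bot was not invited to the channel in step 8.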

]]>
<![CDATA[ Release Overview - Unimus 1.10.0 ]]> https://blog.unimus.net/release-overview-unimus-1-10-0/ 6155a969ed7c6b0001874b79 Wed, 17 Apr 2019 16:21:00 +0000 This release overview showcases the new features and improvements in the Unimus 1.10.0 release.

With each new release, we also upload a release overview video, so if you prefer a video format, you can find it here: Youtube - 1.10.0 Release Overview video

For those who prefer readable content, read on!


Device Tags Dark Theme

Dark Theme

The first new feature of this release is a Dark Theme. We have added a dark theme for both Unimus and our Customer Portal, which should make working with Unimus easier in dark environments. You can switch themes on the Dashboard.


Unimus Config Diff

Diff word-level change highlighting

The biggest change to diffs is that they now display changes down to the word level. Previously Diffs would show which line in the config changed, but now you can see which words inside the line changed - these will be additionally highlighted within the line.
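The general idea behind word-level highlighting can be sketched with Python's difflib - this is only an illustration of the technique, not Unimus's actual implementation:

```python
import difflib

def word_level_diff(old_line: str, new_line: str):
    """Return (removed, added) word lists between two changed lines."""
    old_words = old_line.split()
    new_words = new_line.split()
    matcher = difflib.SequenceMatcher(a=old_words, b=new_words)
    removed, added = [], []
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag in ("replace", "delete"):
            removed.extend(old_words[i1:i2])  # words only present in the old line
        if tag in ("replace", "insert"):
            added.extend(new_words[j1:j2])    # words only present in the new line
    return removed, added

# Only the changed word inside the changed line is flagged:
removed, added = word_level_diff(
    "ip address 10.0.0.1 255.255.255.0",
    "ip address 10.0.0.2 255.255.255.0",
)
```

Here the whole line differs, but only "10.0.0.1" / "10.0.0.2" would be given the extra in-line highlight.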


Unimus Config Diff

Diff usability improvements

Diffs now also by default ignore dynamic backup content (such as timestamps, changing hashes, and other dynamic text) both in Unimus and in config change notifications emails. We have also added a "Send diff" button when you have a diff displayed and want to send it to someone.
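Ignoring dynamic content boils down to dropping known volatile lines before the two backups are compared. A minimal sketch of the approach (the patterns below are illustrative examples, not Unimus's real filter list):

```python
import re

# Illustrative patterns only -- the real filters are per-device-type.
DYNAMIC_PATTERNS = [
    re.compile(r"^! Last configuration change at .*$"),   # IOS-style timestamp
    re.compile(r"^! NVRAM config last updated at .*$"),
]

def strip_dynamic_lines(config: str) -> str:
    """Drop lines matching known dynamic patterns, so diffs ignore them."""
    kept = [
        line for line in config.splitlines()
        if not any(p.match(line) for p in DYNAMIC_PATTERNS)
    ]
    return "\n".join(kept)

# Two backups that differ only in a timestamp compare as identical:
a = "! Last configuration change at Mon Apr 1 2019\nhostname core1"
b = "! Last configuration change at Tue Apr 2 2019\nhostname core1"
```

With the dynamic lines stripped, no config-change notification is generated for timestamp-only differences.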


Config Search Filters

Config search filters

We have added filters to Config Search, so you can limit a search to devices tagged with a certain Tag. We have also added time-based filters, and you can even search the entire history of device configurations. This is useful when, for example, you want to find out when a particular configuration item was added to a device.


Unimus Devices List

Performance improvements

We have massively improved diff performance, with very large diffs rendering about 10 times faster than before. We have also massively improved the performance of Config Search. For example, a large inverse search which took 2 minutes before will now take about 15 seconds. We also worked on Importers in the performance improvement area. An import of 10.000 devices previously took about 1.5 minutes, but will now complete in under 20 seconds.


In addition to these major features, we have also added support for 10 new devices from various vendors, as well as 7 other minor features. This release also contains fixes for 16 various issues and bugs reported since the 1.9-branch release.

We recommend checking out the full changelog below for more details. We highly recommend all users to upgrade to this release.

Full changelog:

= Version 1.10.0 =
Features:
  Added official support for Java 9/10/11
  Added a Dark Theme (theme switching possible on the Dashboard)
  Added fallback to 'show startup-config' for Cisco IOS when full content of 'running-config' is not available (due to privilege issues)
  Cisco ASA will now be discovered even if "show version" is not available in user-exec mode
  Do not retrieve rogue AP table as part of Cisco WLC backups (it caused config change notifications on every backup)
  When a discovered device fails rediscovery with UNABLE_TO_IDENTIFY_DEVICE error its discovered details will be cleared
  Emails expire from email sender queue after 1 hour if they fail to send within this period
  Improved compatibility with certain Dell N series switches
  Improved compatibility with certain Juniper JunOS devices
  Deploy Wizard table width increased (credential and schedule creation screens in the Deploy Wizard)
  New "Ignore dynamic content" option for diffs
  "Only changed lines" and "Ignore dynamic content" options now on by default for diffs
  New "Send diff" button that allows to send currently shown diff over email
  When an import is in progress, a notification will be shown on the Import and Devices screens
  Performance and UX improvements when importing large number of devices (5k+) through Basic Import

  Diffs now highlight actual word changes within changed lines:
    - new diffing logic can recognize changes inside changed lines on a per-word basis
    - changed lines are still highlighted, but changes within lines will be highlighted even more
    - this feature makes it even easier to see what has changed when looking at a diff

  New features for Config Search:
    - added ability to search by Device Tag (search filtering by Tag)
    - added Config Search in a time-range (search filter by date / time)
    - added full historic Config Search option (search in all config history)
    - added option to specify context size (lines before and after match)

  Performance improvements for Config Search and Diff:
    - complete rewrite of rendering logic, performance of rendering improved by 10x
    - Diff rendering performance improvements, cosmetic/color changes
    - for inverse Config Search, do not render config until stack panel is opened
    - added paging for very big config search results (this is to avoid browser limitations)

  Added support for:
    - 3Com 29XX switch series
    - 3Com non-switch devices (chassis, routers, etc.)
    - Dell PowerConnect 8024 (and similar PowerConnect switches)
    - Exablaze Fusion switches
    - Lenovo RackSwitch switches
    - Lenovo Flex System Fabric
    - OcNOS (switches running OcNOS)
    - FireBrick devices
    - Ruckus Wireless Bridges
    - Turris OS devices (Turris Omnia, etc.)
    - InfiNet WANFleX devices
    - SonicWall devices (SonicOS)
    - Telco Systems T-Marc devices

Fixes:
  Fixed a deadlock when using Network Scan with HSQL DB
  Fixed expiring session in other tabs when a new tab was open
  Fixed backup 'Download' returning a wrong backup in certain cases
  Fixed address validation not accepting a device if it contained certain special characters in its FQDN
  Fixed Mass Config Push merging first line of output with the command line (missing newline)
  Fixed Import / NMS sync considering failed imports / syncs as successful in certain cases
  Fixed Import / NMS sync not sending failure notifications on failed imports / syncs in certain cases
  Fixed newer versions of Fiberstore switches not being discovered
  Fixed incorrect configuration change notifications on newer versions of IgniteNet MetroLinq
  Fixed rare incorrect configuration change notifications for MikroTik RouterOS
  Fixed incorrect configuration change notifications on F5 BIG-IP and F5 BIG-IQ
  Fixed incorrect configuration change notifications on Cisco WLC
  Yet another round of fixes for more incorrect configuration change notifications on FortiOS
  Fixed discovery on certain HP 1920S switches
  Fixed error when searching in bound / not bound device tables in initial config push binding
  Fixed Powercode importer not being rescheduled when default schedule changed
  Fixed Cisco IOS driver wrongly discovering CDB series switches as IOS routers
  Fixed incorrectly identifying some Cisco Catalyst switches as IOS routers
  Fixed config search for HSQL and PostgreSQL databases

  Fixed multiple cases where a Read-Only account could modify objects:
    - read-only account was able to re-run Mass Config Push on an output group through the re-run menu
    - read-only account was able to clone or delete a Mass Config Push preset through the right-corner menu
    - read-only account was able to clone or delete a Network Scan preset through the right-corner menu

Tickets closed by this release:
  UN-126, UN-191, UN-336, UN-348, UN-350, UN-361, UN-385, UN-392, UN-406

Known issues:
  Special characters can be replaced by '?' under specific circumstances
                    
]]>
<![CDATA[ Release Overview - Unimus 1.9.0 ]]> https://blog.unimus.net/release-overview-unimus-1-9-0/ 6155a969ed7c6b0001874b78 Wed, 09 Jan 2019 16:16:00 +0000 Unimus 1.9.0 is the biggest release of Unimus to date. This release overview showcases the new features and improvements present in the 1.9.0 release.

With each new release, we also upload a release overview video, so if you prefer a video format, you can find it here: Youtube - 1.9.0 Release Overview video

For those who prefer readable content, read on!


Device Tags

Device Tags - friendlier and available in more places

We improved the usability of Device Tags and made them more available across Unimus. Device Tags now have their own top-level menu, and managing them is more user-friendly. We have also added a new "Tags" window on the Devices screen, so you can tag devices directly from the Devices screen.


Device Access Limitation

Device Access Limitation - usability improvements

Since we were working on improvements to Device Tags, we also improved the Device Access Limitation system. The Device Access table in User management has been simplified and made more intuitive. In the Devices screen, we added a new table that shows which accounts have access to the selected device, and what grants each user that access. You can find this under the new "Tags" window.


Device Edit

Device Ownership - new feature to simplify access management

We have also added Device Ownership - when a user creates a device, they become the owner of that device. Owners have access to their devices even if limited by Tags or other access limitations. This fixes an issue where an access-limited Operator-level user was not able to see the devices they themselves created. You can of course change the owner of any device, or set the owner to "None" in the device "Edit" window.


New Credentials Creation

New "High Security Mode" for credentials and enable/configure passwords

When creating a new credential, you can now create it in "High security mode". This will make the password for this credential completely private, and disables any "Show Password" features for that credential across Unimus. The password will be un-retrievable even to administrator-level accounts, making it easier to comply with security requirements in strict environments.


Device Comments

"Comments" windows - real-time updates

The "Comments" windows now have live-update, so when someone comments all other open Comments windows will refresh automatically. When a new comment is added to any entity in the system, all other users also immediately see the comment icon updated as well. This makes the commenting feature in Unimus update in real-time for all users, and improves UX dramatically.


In addition to these major features, we have also added support for 9 new devices from various vendors, as well as 9 other minor features. There are also new API endpoints for retrieving config changes and diffs.
               
We recommend checking out the full changelog below for more details.
               
This release also contains fixes for more than 20 various issues and bugs reported since the 1.8-branch release. We have worked very hard to eliminate all issues small and large, and fixed many edge-cases where device interaction would fail. We are happy to report that based on beta and RC testing, the 1.9.0 release is shaping up to be the most stable release of Unimus yet.
               
We highly recommend all users to upgrade to this release.

Full changelog:

= Version 1.9.0 =
Features:
  Juniper JunOS driver completely rewritten, solving multiple issues with JunOS
  Added hints (contextual help) to "Do not manage device" and "High security mode" checkboxes
  Improved compatibility with certain Cisco MDS models
  Backups for Cisco IOS and NXOS will now complete even if the "show vlan brief" command is not available
  Increased subnets text area limit in the Network Scan to 65k characters
  Improved support for devices with certain versions of Comware / Huawei VRP
  Improved support for certain models of Cisco SMB switches
  Mass Config Push now supports devices which present a menu after login (pfSense, ProCurve stacks, etc.)
  Improved support for devices that display "Press any key to continue" but really ignore that and proceed to prompt
  Improved visuals for all input fields to better handle IPv6 addresses or long FQDNs
  Event system (top right corner popups) enhanced and visually unified
  Added support for "enhanced security-mode" on HP ProVision / HPE ProCurve / Aruba ArubaOS
  Improved support for "(y/n)" prompts

  Improved the usability of Device Tags:
    - Device Tags now have their own top-level menu for tag management
    - improved the tag management (add/remove/assign/unassign) process
    - various other UI and UX improvements related to tags

  Improved the Device Access Limitation system:
    - Added a new "Users with access" window to devices which have Tags
      (will show which users have access to this device, and where the access comes from)
    - Added device ownership - the account which creates a device is now the owner of that device
    - owners always see the devices they own, even if Access Limited by Tags
      (solves Operator-level users not seeing the devices they create when they are access-limited)
    - Accounts are no longer able to change their own access role - you can't cut yourself off from Admin access anymore

  Update to the Comments windows across Unimus:
    - Comments now have live updates, if one user adds a new comment, all users will immediately see it
    - multiple graphical and visual updates and fixes to the Comments windows

  Added two new API endpoints to retrieve configuration changes
    - you can now retrieve list of devices with config changes over the API
    - external integrations can now use the API to display diffs generated by Unimus
    - more details at: https://wiki.unimus.net/display/UNPUB/Full+API+v.2+documentation
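A client-side sketch of calling these endpoints is below. The endpoint path and token header shown are assumptions for illustration - check the linked API v2 documentation for the actual paths and authentication details:

```python
# Hypothetical endpoint path -- consult the Unimus API v2 docs for the real one.
def build_changes_request(base_url: str, token: str, device_id: int):
    """Assemble the URL and headers for fetching a device's config changes."""
    url = f"{base_url.rstrip('/')}/api/v2/devices/{device_id}/changes"
    headers = {
        "Authorization": f"Bearer {token}",  # API token created in Unimus
        "Accept": "application/json",
    }
    return url, headers

url, headers = build_changes_request("https://unimus.example.com/", "my-token", 42)
# url and headers can then be passed to any HTTP client (requests, urllib, ...)
```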

  Added high security mode option for credentials and enable passwords
    - will disable "Show password" for these credentials anywhere in Unimus
    - useful to make sure no one (no matter their access level) can retrieve password from Unimus

  New "Sensitive data striping" feature:
    - will not store any sensitive data in backups (passwords, keys, etc.)
    - can be configured globally, or per-Tag
    - currently supported on Cisco ASA, IOS, IOS-XR, Nexus, NXOS
    - more details at: https://wiki.unimus.net/display/UNPUB/Sensitive+data+striping

  The CLI login handler has been rewritten from scratch:
    - this improves overall compatibility with all devices that we support
    - many edge-cases where login to device would fail were solved

  Added support for:
    - A10 Networks Thunder series
    - Adtran TA (Total Access) 2nd gen.
    - Cisco SMB SF2xx / SG2xx / SX2xx switch series
    - more H3C Comware switches
    - HPE 1920S
    - Nomadix AG
    - more Zyxel MGS switches (37xx and newer)
    - more Zyxel USG models
    - Zyxel XGS switches
    - more Zyxel ZyWALL models

Fixes:
  Fixed a bug causing some login banners to make Unimus fail login to devices
  Solved access-limited users not seeing the devices they create when they are Operator level (see device ownership in Features section)
  Fixed users with Device Access limitations seeing devices they should not see in Mass Config Push device binding
  Fixed UBNT devices not discovering / backing-up if firmware version contained 4 digits
  Fixed some environment configuration being ignored on startup (logging, proxy, etc.)
  Fixed a rare failure when trying to switch to enable / configure mode during discovery
  Fixed enable / configure mode switch failing on devices which responded with a lot of data immediately during mode switching
  Fixed Mass Config Push incorrectly reporting "unsupported command" in very rare cases
  Fixed multiple issues with Juniper JunOS backups (parts of backups missing, incorrect change notifications, etc.)
  Fixed not discovering / backing-up some models of UBNT AirFibers
  Fixed some versions of ZyXel USGs not being discovered
  Fixed some versions of Comware / Huawei VRP devices not being discovered
  Fixed some models of Cisco SMB switches not being discovered
  Fixed "--More--" not being properly removed from FortiOS backups in rare circumstances
  Another round of fixes for more incorrect configuration change notifications for FortiOS
  Fixed table search not behaving as expected with certain special characters
  Fixed incorrect username format tooltip message on the login screen
  Fixed inconsistent width of subnets area in the NetworkScanView when changing browser window size
  Fixed Unimus switching view when licensing server went offline and then back online
  Fixed device output logging not working after enabling/disabling it multiple times
  Fixed a UI error when trying to backup multiple unmanaged devices
  Fixed a UI error when a 'read only' account navigated to the Notifications view
  Fixed a UI error when 'Expand command(s) windows' clicked when creating new mass config push preset
  Fixed various cases where login would fail to devices that display "Press any key to continue" but
    really ignore that and proceed to prompt
  Fixed some models of Zyxel MGS switches not being discovered

Tickets closed by this release:
  UN-245, UN-272, UN-316, UN-334, UN-362, UN-365, UN-371, UN-372, UN-377, UN-378, UN-379, UN-380, UN-382

Known issues:
  Special characters can be replaced by '?' under specific circumstances
                    
]]>
<![CDATA[ Validating the security of your MikroTik routers network-wide ]]> https://blog.unimus.net/validating-the-security-of-your-mikrotik-routers-network-wide/ 6155a969ed7c6b0001874b77 Mon, 06 Aug 2018 16:06:00 +0000 Introduction

Recently, there has been a resurgence of attacks on MikroTik RouterOS devices (articles here, here and here) using vulnerabilities that were fixed in April 2018 (release 6.42.1), but also falling back to some older vulnerabilities.

These new attacks primarily use an exploit in Winbox (one of MikroTik's management interfaces) to gain control of the router and perform various malicious tasks. This Winbox exploit is far from the only exploit that exists for RouterOS, however. For example, the http(s) server in older versions of RouterOS contains an exploit that was made public during the Vault7 leaks.

In this article, we will use Unimus to check if any of your routers are compromised across your whole network. We will also look into how to use Unimus to both audit and fix potential security holes for old and new MikroTik exploits alike.

Preparations

For this article, we will assume you have the devices you want to audit / secure already in Unimus, and they are properly discovered. If this is not the case, we suggest checking out our Wiki, and/or video tutorials.

We will be creating various Mass Config Push presets, but if you wish you can run all of these steps in sequence inside a single Mass Config Push. Using multiple presets just makes auditing your network easier and more organized.

Each of the following sections contains a single security check / fix. When you run these commands, you might see multiple Mass Config Push output groups. We recommend checking each group for corresponding output.

For example, when running the "Checking if routers are exploited" preset, you might see 2 output groups. One output group will contain 45 devices, and the other 2 devices. In this example, when you inspect both output groups, you should find that 45 devices are exploit-free, and the other 2 have been infected.

Unimus MikroTik exploit check

Checking if routers are exploited

This preset will check if any of your devices have been exploited using any of the latest RouterOS exploits.

:if ([/ip socks get enabled]) do={
  :put "Socks is enabled, if you didn't do this manually, this device has been breached!"
}
:if ([:len [/file find name~".*[Mm]ikrotik\\.php.*"]] >= 1) do={
  :put "\"mikrotik.php\" file found on the file system, high chance this device has been breached!"
}
:if ([:len [/system script find source~".*[Mm]ikrotik\\.php.*"]] >= 1) do={
  :put "A script containing \"mikrotik.php\" found, high chance this device has been breached!"
}
:if ([:len [/system scheduler find on-event~".*[Mm]ikrotik\\.php.*"]] >= 1) do={
  :put "A scheduled script containing \"mikrotik.php\" found, high chance this device has been breached!"
}
:if ([:len [/user find name~".*service.*"]] >= 1) do={
  :put "\"service\" user exists, if you didn't create this user manually, this device has been breached!"
}

If no devices in your network have been breached, you will see only a single output group, with empty output. If you see any other output groups, devices in those output groups have been compromised. The content of the output group will tell you which of the exploit checks matched on those devices.

How to remediate if a device has been exploited                            
Simply create a Mass Config Push preset like this:

/ip socks
set enabled=no
set port=1080

/file
:foreach i in=[find name~".*[Mm]ikrotik\\.php.*"] do={
  remove $i
}

/system script
:foreach i in=[find source~".*[Mm]ikrotik\\.php.*"] do={
  remove $i
}

/system scheduler
:foreach i in=[find on-event~".*[Mm]ikrotik\\.php.*"] do={
  remove $i
}

/user
:foreach i in=[find name~".*service.*"] do={
  remove $i
}

Run this on the affected devices.

Make sure to change the password (preferably also username) you use to access the affected devices AFTER you remediate the exploits.

Checking if your routers have firewall

Having a properly firewalled input chain on your MikroTiks is super important. This preset checks if your firewall exists, and has an explicit drop rule in the input chain.

/ip firewall filter
:if ([:len [find]] = 0) do={
  :put "No firewall configured on this device"
} else={
  :if ([:len [find chain=input]] = 0) do={
    :put "No input firewall configured on this device"
  } else={
    :if ([:len [find chain=input action=drop !connection-state]] = 0) do={
      :put "No explicit drop rule in input firewall configured on this device"
    }
  }
}

How to remediate            
There is sadly no simple response to this one. You will have to build a proper firewall for your needs. (if there is enough interest, we might write a separate article on this one)

Checking service ACLs (address restrictions)

If for some reason you do not want to have a firewall on your routers (this should never be the case - you can use "/ip firewall raw" if you want to fasttrack some traffic), you must restrict the services on RouterOS to only certain IPs (or IP ranges).

If you DO have a firewall properly configured, it will already protect these services, so with a proper firewall setup, limiting access to services is not essential. You can however use this as a 2nd line of defence.

/ip service
:foreach i in=[find] do={
  :if ((![get $i disabled]) && ([get $i address] = "")) do={
    :put ([get $i name] . " service is not restricted to any address!")
  }
}

How to remediate                            
You can push the following change to your devices:

{
:local address "1.2.3.4"

/ip service
:foreach i in=[find] do={
  set $i address=$address
}
}

Make sure to properly change the address variable in the 1st line.

Update RouterOS to the latest version

All of the exploits currently running wild use vulnerabilities that have been fixed in the latest RouterOS versions. Even after improving security using the above methods, it is highly recommended to upgrade to the latest RouterOS release.

We have a separate blog article on how to update RouterOS to the latest version across your entire network here: https://blog.unimus.net/network-wide-mikrotik-routeros-upgrade-with-unimus/

Final words

That's it for this blog article, we hope it will help you at least a little with your network security. If you are new to Unimus, check out our website to learn more about us!

We are offering an unlimited trial license if you want to give Unimus a try (independently of our free tier)! Click here to learn more!

]]>
<![CDATA[ Unimus 1.7.0 and Network Automation ]]> https://blog.unimus.net/unimus-1-7-0-and-network-automation/ 6155a969ed7c6b0001874b76 Thu, 10 May 2018 14:27:00 +0000 Last week we released Unimus 1.7.0. The biggest part of this release was our new Network Automation / Mass Config Push feature.

This release marks a huge milestone for Unimus, and in this article we would like to tell you more about it!

Before we go any further, here is our release overview video:

This article will mostly focus on Network Automation, since this is the biggest and the most interesting feature in this release.

How to make Network Automation easy?

Ever since we started developing Unimus, our main goal has always been "simple and fast to use, but powerful enough to solve hard problems".

We approached Network Automation with the same mind-set:

  • you should not need to know an enormous technology stack simply to automate deployment of a VLAN across 50 switches
  • new networkers should not have to spend weeks learning to understand and use a network automation platform
  • it should just work, no matter the type of the device, or its vendor

Before we go any further, let's state one basic premise - Unimus will not turn your network into IaC (infrastructure as code). This is not our goal, nor our aim, and if you want your network to be IaC-managed, the automation features in Unimus are likely not what you are looking for.

So who are the Network Automation features in Unimus aimed at?

Every single network administrator who wants to save time and automate, without having to learn programming or entire automation frameworks. Automate without needing to learn new languages, change work-flows, and without having to worry if their vendor supports it.

With the Network Automation features in Unimus, you can use the CLI commands you already know for your equipment. You can automate configuration for all of the 249+ vendors we support in Unimus. And most importantly - everything is easy to understand and fast to use.
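As an illustration of what this looks like in practice, the contents of a push preset are just the commands you would type yourself. A hypothetical preset to deploy a new VLAN across Cisco IOS switches (the VLAN ID, name and interface are made-up examples, not from any real deployment) could be as simple as:

```
vlan 50
 name GUEST-WIFI
exit
interface GigabitEthernet0/1
 switchport trunk allowed vlan add 50
end
write memory
```

No templating language, no playbooks - just the CLI you already know, run across every bound device.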

Historically, the barriers to entry into network automation have been huge. With this release, we are hoping to remove the steep learning curve associated with network automation.

We are hoping to give the power to every network administrator to mass-deploy configuration across their network, and to enable network scaling by making automation easy and painless.

For an example on how to automate MikroTik RouterOS deployments network-wide, you can check our previous article: https://blog.unimus.net/network-wide-mikrotik-routeros-upgrade-with-unimus/

Unimus 1.7.0 changelog

Finally, the full changelog of 1.7.0:

= Version 1.7.0 =
Features:
  Improved configuration change detection on Cisco IOS and Cisco NXOS
  Unimus now detects when devices return "permission denied" / "access denied" errors and fails the backup job
  Improved error reporting in Dashboard "Show log" for all jobs
  Improved logging of various errors and failed jobs in the log file
  Improved Enable/Configure mode switching for all supported vendors
  Added detection of command unsupported and permission denied errors in output of devices that do not use paging
  Added support for devices which require entering login password twice
  Added device description to job logs on the Dashboard

  Added a new "Mass config push" / "Mass reconfig" feature:
    - Unimus is now able to push configuration to your devices
    - you can create as many "push presets" as needed to automate your network
    - devices will be switched to Enable or Configure mode automatically if a push preset requires it
    - output from the push job is grouped, no need to check output of each device manually

  Added support for Enable/Configure passwords separate from Credentials (username/password combinations):
    - you can specify a list of Enable/Configure passwords on the Credentials screen
    - Unimus will automatically discover which Enable/Configure password is valid for a device

  Added support for Credential and Enable/Configure password binding:
    - you can bind specific Credentials or Enable/Configure passwords to devices
    - this will disable Credential and Enable/Configure password discovery on the device
    - only the bound Credential and Enable/Configure passwords will be used for the device
    - discovery, backup and any other operations on the device will fail if the bound Credentials are invalid

  Added a new "Network scan" (device discovery) feature:
    - Unimus is now able to adopt devices by scanning your network
    - you can define multiple subnets for scanning, and Unimus will find available devices
    - network scans can be scheduled to periodically adopt devices from the network

  Added support for:
    - Adtran NetVanta
    - Adtran Total Access
    - Brocade NetIron
    - HP 1950 switches
    - Ruckus Unleashed
    - ZyXel ZyWALL

Fixes:
  Fixed Brocade NetIron/FastIron/TurboIron SX/CX/GS/LS/WS not being discovered
  Fixed wrong backup contents for multi-context ASA
  Fixed not properly stripping pagination on some models of Netonix switches
  Devices with very long backup time (3+ minutes) would not be backed up, now they will be
  Fixed discovery not working when quickly removing and then re-adding the same device
  Fixed Citrix NetScaler driver not working with newer versions of NetScaler
  Fixed connections sometimes failing to slow devices
  Fixed missing scroll-bar in "View backup"
  Fixed wrong backups table columns width
  Fixed account with READ_ONLY role could access 'Adding the first device' screen
  Fixed not properly handling empty device address in Basic Importers
  Fixed license key change handling (it was not possible to change license key in certain situations)

Tickets closed by this release:
  UN-34, UN-127, UN-209, UN-232, UN-251, UN-309, UN-310, UN-311, UN-312, UN-313

Known issues:
  Special characters can be replaced by '?' under specific circumstances
]]>
<![CDATA[ Network-wide MikroTik RouterOS upgrade with Unimus ]]> https://blog.unimus.net/network-wide-mikrotik-routeros-upgrade-with-unimus/ 6155a969ed7c6b0001874b75 Fri, 27 Apr 2018 12:57:00 +0000 Let's face it - doing a network-wide roll-out of a new version of software for your switches or routers is painful and takes a LOT of time. However, it is absolutely necessary to keep the firmware/software of your networking equipment up to date.

Recently, MikroTik has had a series of severe vulnerabilities. More details can be found here and here. But MikroTik is not alone in this. Cisco also recently had a bad ASA vulnerability (info here), and just last year Ubiquiti had a massive exploit which brought down many networks around the world (more details).

This article will focus on MikroTik - we will show you how you can do a network-wide mass upgrade of RouterOS using Unimus and the RouterOS Package Source feature. What's even better, the entire upgrade process (including setup of Unimus and the RouterOS Package Source) can be done in under an hour.

Topology of our test network

The topology for the network we will be testing on is simple:

  • we have a Package Source (which is just a normal RouterOS device - for this demo a CHR)
  • then we have 5 MikroTiks we want to update RouterOS on
  • and finally we have our Unimus system
test network topology

Configuring the RouterOS Package Source

The Package Source will be a MikroTik CHR for this demo. To make this CHR a package source for all our other MikroTiks, we first need to get the packages we are interested in. You can get packages from the MikroTik download page. For this demo, I just downloaded the latest packages for ARM and x86.

After you get the packages, you can upload them to your package source router using SCP. I created a ros-packages directory, and put them there.
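For example, from a machine that has the downloaded packages (the filename is a placeholder - use whichever package versions you downloaded, and substitute your Package Source address and credentials):

```shell
scp routeros-x86-6.42.1.npk admin@10.9.21.235:ros-packages/
```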

MikroTik RouterOS Package Source packages

And that's actually everything you need to do on the Package Source "server".

Configuring Unimus to talk to the devices we want to upgrade

For this article, we assume that you have an empty, but fully deployed Unimus instance ready. If you don't, you can get Unimus from our downloads page, and we have guides on our Wiki here and here that will help you deploy Unimus.

First we need to make sure we have proper credentials in Unimus which we will be using to connect to our routers. You can check this in the Credentials screen. If the proper credentials are present, we need to add our devices into Unimus. We can use the address list import to make this happen.

You can go to Basic import > Address list import and just paste in the list of IPs.

adding devices to Unimus

Unimus should discover your devices, and you should see them properly in the Devices screen.

devices added in Unimus

Before we go any further, it's a good idea to make a backup of the configuration on your routers (in case the routers don't survive the RouterOS upgrade for some reason). Unimus normally backs devices up on a schedule (by default at 3AM every day), but since we just added our devices, let's make a manual backup. Simply select all your devices, and do Backup now.

Pushing Package Source settings to our network

Before we can perform a mass-upgrade, we need to configure our entire network to use our package source. We will need to create a config push preset in Unimus. Go to the Mass config push > Add preset screen. Give it a name and a description, and bind all of your MikroTiks to this preset (using Select devices > Not bound devices > Bind). Then save the config push preset.

Adding push targets


After you have created your preset, you can open it (by clicking on it in the Mass config push screen). The commands we will be pushing are these:

/system upgrade upgrade-package-source
add address=10.9.21.235 user=admin
password
Mass config push setup

You will need to adjust the address and username/password here. The address will be the address of our Package Source, and username/password to use to log into it. Now we can save and run our config push (use the Save and Run now buttons).

Running a mass config push

You should see one output group when the push finishes. If there are any errors, you can check the error output groups to see on which devices the errors occurred.

Performing a network-wide RouterOS upgrade

Now that the configuration to use our package source has been distributed to our network, we can perform a mass upgrade. Change the commands for your config push preset to:

/system upgrade
refresh
:delay 5
print
Check package source for upgrade

And now run the config push. This will cause all MikroTiks to check the package source for upgrade, and print out the available packages.
           
Please inspect the output groups of this command when running it on your network.
You can have multiple output groups (due to the different architectures that RouterOS supports), but there should be no errors, and all output groups should show the newest RouterOS packages as available.

If everything is in order, we can pull the newest packages to all our routers. Change the commands for your config push preset to:

/system upgrade
download [find version=6.42.1]
Pulling new packages to routers

This will tell all our MikroTiks to pull our package from the package source. Please note that in bigger networks, this will take a while. You can always check when all of your MikroTiks are done with the commands:

/system upgrade
:put [get [find version=6.42.1] status]

Running this will give us different output groups based on whether a router has already finished downloading the upgrade package, or the download is still in progress.

Checking upgrade status

After all your routers have downloaded the package, they simply need a reboot to install it. Change the commands for your config push preset to:

/system
reboot
y

And push that to the routers. They will reboot and should come back up with the latest RouterOS.

Final words

Please remember to also update RouterBOOT (the firmware / Bootloader of RouterBOARDs).

You can do this just by changing the push preset we created in this guide and pushing the appropriate commands to your RouterBOARDs.
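A sketch of such a push on RouterOS 6.x RouterBOARDs might look like the following (the new RouterBOOT firmware is staged by the upgrade command and only takes effect on the next reboot, and both commands ask for confirmation):

```routeros
/system routerboard upgrade
y
/system reboot
y
```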

If you are new to Unimus, check out our website to learn more about us!
           
We are offering an unlimited trial license if you want to give Unimus a try (independently of our free tier)! Click here to learn more!

]]>