Tuesday, January 6, 2026

Enumerating Processes on Windows

Enumerating processes is a common task on a computer.  Whether you are using something like Task Manager, Process Explorer, or System Informer on Windows, or 'ps' or 'top' on Linux/Unix/macOS - sometimes you just need to know what processes are running on your system.  As a developer, sometimes you need to know programmatically what processes are running.  However, enumerating the running processes on Windows is not straightforward, or more accurately, there are multiple ways to accomplish the task.  There are no fewer than 5 different ways to enumerate running processes on Windows.  This raises questions: which should I use?  Do the methods offer different benefits?  Which is the most efficient?  In this post we will look at each method and discuss the differences, along with the pros and cons.

To start off, the 5 methods of enumerating processes programmatically on Windows are...

  1. EnumProcesses Win32 API
  2. Toolhelp library
  3. WTSEnumerateProcesses WTS API
  4. Win32_Process table in WMI
  5. NtQuerySystemInformation (an undocumented API)

 

EnumProcesses Win32 API

Let's start off by looking at the EnumProcesses() API.  I suspect this is one of the oldest methods on Windows, if not the original method.  The API is simple: you provide a buffer and the API fills it with a list of PIDs for all running processes.  It is then up to you to call additional APIs to get useful info about each PID, such as the process filename.

EnumProcesses() is not a very graceful API in that if the buffer you provide is too small, the API does not tell you how large it should be.  So you may end up calling the API multiple times until the buffer you provide is larger than the amount of data the API returns.

Here is a sample function demonstrating the use of the EnumProcesses API.

std::map<uint32_t, std::wstring> EnumProcessesWin32(void)
{
    std::map<uint32_t, std::wstring> mapProcesses;

    size_t nAllocCount = 1024;
    auto pBuffer = std::make_unique<DWORD[]>(nAllocCount);

    size_t nLoopCount = 0;
    DWORD dwReturnedSize = 0;    // remains 0 if EnumProcesses fails
    EnumProcesses(pBuffer.get(), (DWORD)(nAllocCount * sizeof(DWORD)), &dwReturnedSize);
    while(dwReturnedSize / sizeof(DWORD) == nAllocCount && nLoopCount++ < 10)
    {
        nAllocCount += 1024;    // Increase the allocation and try again
        pBuffer = std::make_unique<DWORD[]>(nAllocCount);
        EnumProcesses(pBuffer.get(), (DWORD)(nAllocCount * sizeof(DWORD)), &dwReturnedSize);
    }

    if(dwReturnedSize)
    {
        size_t nCount = dwReturnedSize / sizeof(DWORD);
        for(size_t n = 0; n < nCount; ++n)
            mapProcesses.emplace(pBuffer.get()[n], GetProcessFilename(pBuffer.get()[n]));
    }

    return mapProcesses;
}
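The GetProcessFilename() helper used above is not a Win32 API and the post does not define it.  Here is one possible sketch, assuming QueryFullProcessImageNameW (available on Vista and newer); the original may use a different API.

```cpp
#include <windows.h>
#include <string>

// Hypothetical implementation of the GetProcessFilename() helper used above.
// Returns an empty string on failure (e.g. access denied, or PID 0).
std::wstring GetProcessFilename(DWORD dwPid)
{
    std::wstring strFilename;

    HANDLE hProcess = OpenProcess(PROCESS_QUERY_LIMITED_INFORMATION, FALSE, dwPid);
    if(hProcess)
    {
        WCHAR szPath[MAX_PATH];
        DWORD dwSize = MAX_PATH;
        if(QueryFullProcessImageNameW(hProcess, 0, szPath, &dwSize))
            strFilename = szPath;    // full path; strip the directory if you only want the name

        CloseHandle(hProcess);
    }

    return strFilename;
}
```

Note that even with this helper, many system processes cannot be opened from a normal user account, so some entries in the map will have empty names.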


Toolhelp library

The toolhelp library is another fairly old method for enumerating processes.  This library works by first calling CreateToolhelp32Snapshot(), which takes a snapshot of the processes at that moment.  You then call various APIs to analyze the snapshot one record at a time, and when you are done you close the handle, which frees the resources.  This may sound complex, but the code can be fairly simple.

std::map<uint32_t, std::wstring> EnumProcessesToolhelp(void)
{
    std::map<uint32_t, std::wstring> mapProcesses;

    HANDLE hSnapshot = CreateToolhelp32Snapshot(TH32CS_SNAPPROCESS, 0);
    if(hSnapshot != INVALID_HANDLE_VALUE)
    {
        PROCESSENTRY32W pe32 = {0};
        pe32.dwSize = sizeof(pe32);

        if(Process32FirstW(hSnapshot, &pe32))
        {
            do{
                mapProcesses.emplace(pe32.th32ProcessID, pe32.szExeFile);
            }while(Process32NextW(hSnapshot, &pe32));
        }

        CloseHandle(hSnapshot);
    }

    return mapProcesses;
}


WTSEnumerateProcesses WTS API

The WTS APIs are newer, having first appeared in Windows Vista.  This API is designed for services running on multi-user systems, but the API is available on all versions of Windows.  In my opinion, the WTS API is probably the easiest and cleanest code for general use.  You call a single API which returns the results in an allocated block of memory.  After you analyze the results you call a second API to free the memory.  Here is an example function.

std::map<uint32_t, std::wstring> EnumProcessesWts(void)
{
    std::map<uint32_t, std::wstring> mapProcesses;

    PWTS_PROCESS_INFOW pwtspi;
    DWORD dwCount;
    if(WTSEnumerateProcessesW(WTS_CURRENT_SERVER_HANDLE, 0, 1, &pwtspi, &dwCount))
    {
        for(DWORD dw = 0; dw < dwCount; ++dw)
            mapProcesses.emplace(pwtspi[dw].ProcessId, pwtspi[dw].pProcessName);

        WTSFreeMemory(pwtspi);
    }

    return mapProcesses;
}


Win32_Process table in WMI

Windows Management Instrumentation is a horrible API to interact with, at least from C++.  The amount of overhead required makes this method ugly and slow.  WMI is best used from scripting languages (e.g. PowerShell or Visual Basic).  WMI does have one benefit the others lack - you can query it from a remote machine (assuming the firewall does not block it).  That makes WMI the only one of these methods that can enumerate processes remotely.

The following function shows an example, but note that it uses a helper class to hide the complexity of WMI, so the actual code involved is far more complicated than it looks.

std::map<uint32_t, std::wstring> EnumProcessesWmi(void)
{
    std::map<uint32_t, std::wstring> mapProcesses;

    CWmiService wmiSvc;
    if(SUCCEEDED(wmiSvc.Open()))
    {
        CWmiClass wmiClass = wmiSvc.GetClassFormat(L"Win32_Process", L"ProcessId", L"Name");

        CWmiInstance wmiInst = wmiClass.GetFirstInstance();
        while(wmiInst.IsOpen())
        {
            uint32_t nPid = (uint32_t)wmiInst.GetAsUInteger(L"ProcessId");
            std::wstring str(wmiInst.GetAsString(L"Name"));
            mapProcesses.emplace(nPid, str);

            wmiInst = wmiClass.GetNextInstance();
        }
    }

    return mapProcesses;
}
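For readers curious what a helper class like this has to hide, here is a sketch of the raw COM sequence (the function name is mine, and error handling is reduced to early exits to keep the example short):

```cpp
#define _WIN32_DCOM
#include <windows.h>
#include <wbemidl.h>
#include <comdef.h>
#include <cstdint>
#include <map>
#include <string>
#pragma comment(lib, "wbemuuid.lib")

// A sketch of querying Win32_Process through raw COM, without a helper class.
std::map<uint32_t, std::wstring> EnumProcessesWmiRaw(void)
{
    std::map<uint32_t, std::wstring> mapProcesses;

    if(FAILED(CoInitializeEx(nullptr, COINIT_MULTITHREADED)))
        return mapProcesses;
    CoInitializeSecurity(nullptr, -1, nullptr, nullptr, RPC_C_AUTHN_LEVEL_DEFAULT,
                         RPC_C_IMP_LEVEL_IMPERSONATE, nullptr, EOAC_NONE, nullptr);

    IWbemLocator* pLocator = nullptr;
    if(SUCCEEDED(CoCreateInstance(CLSID_WbemLocator, nullptr, CLSCTX_INPROC_SERVER,
                                  IID_IWbemLocator, (LPVOID*)&pLocator)))
    {
        IWbemServices* pServices = nullptr;
        if(SUCCEEDED(pLocator->ConnectServer(_bstr_t(L"ROOT\\CIMV2"), nullptr, nullptr,
                                             nullptr, 0, nullptr, nullptr, &pServices)))
        {
            CoSetProxyBlanket(pServices, RPC_C_AUTHN_WINNT, RPC_C_AUTHZ_NONE, nullptr,
                              RPC_C_AUTHN_LEVEL_CALL, RPC_C_IMP_LEVEL_IMPERSONATE,
                              nullptr, EOAC_NONE);

            IEnumWbemClassObject* pEnum = nullptr;
            if(SUCCEEDED(pServices->ExecQuery(_bstr_t(L"WQL"),
                             _bstr_t(L"SELECT ProcessId, Name FROM Win32_Process"),
                             WBEM_FLAG_FORWARD_ONLY | WBEM_FLAG_RETURN_IMMEDIATELY,
                             nullptr, &pEnum)))
            {
                IWbemClassObject* pObject = nullptr;
                ULONG uReturned = 0;
                while(pEnum->Next(WBEM_INFINITE, 1, &pObject, &uReturned) == S_OK)
                {
                    VARIANT vtPid, vtName;
                    pObject->Get(L"ProcessId", 0, &vtPid, nullptr, nullptr);
                    pObject->Get(L"Name", 0, &vtName, nullptr, nullptr);
                    if(vtPid.vt == VT_I4 && vtName.vt == VT_BSTR)
                        mapProcesses.emplace((uint32_t)vtPid.lVal, vtName.bstrVal);
                    VariantClear(&vtPid);
                    VariantClear(&vtName);
                    pObject->Release();
                }
                pEnum->Release();
            }
            pServices->Release();
        }
        pLocator->Release();
    }

    CoUninitialize();
    return mapProcesses;
}
```

Compare this with the four-line body of the WTS version above and it becomes clear why I consider WMI the ugliest of the methods.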

 

NtQuerySystemInformation (an undocumented API)

The last method is the undocumented Windows API NtQuerySystemInformation.  I call the API "undocumented" but this is a little misleading.  The API is documented: Microsoft details what parameters to pass in and the structs that are returned.  But Microsoft would like to discourage the use of this API, so 1) you cannot statically link to it, you must load it dynamically, 2) Microsoft says to use alternate APIs (though they don't list recommended alternatives), and 3) they claim the API may change at any time in the future.  So use of this API carries some risk.  I suspect NtQuerySystemInformation is the single underlying method of enumerating processes on Windows - all of the other methods are wrappers around this API, each providing its own set of flags and memory management requirements.

Because you cannot statically link to the API, the function to use this method appears more complicated than the rest.

std::map<uint32_t, std::wstring> EnumProcessesUndocumented(void)
{
    [[maybe_unused]] const ULONG SystemBasicProcessInformation = 252;    // the faster variant discussed below
    constexpr NTSTATUS STATUS_INFO_LENGTH_MISMATCH = (NTSTATUS)0xC0000004;
    using PFNNTQUERYSYSTEMINFORMATION = NTSTATUS(NTAPI*)(ULONG, PVOID, ULONG, PULONG);

    std::map<uint32_t, std::wstring> mapProcesses;

    HMODULE hNtDll = GetModuleHandleW(L"ntdll.dll");
    PFNNTQUERYSYSTEMINFORMATION pfnNtQuerySystemInformation = (PFNNTQUERYSYSTEMINFORMATION)GetProcAddress(hNtDll, "NtQuerySystemInformation");
    if(pfnNtQuerySystemInformation == nullptr)
        return mapProcesses;    // should never happen, but don't crash if it does

    NTSTATUS status;
    ULONG nBufferSize = 0x80000;
    auto pProcessBuf = std::make_unique<BYTE[]>(nBufferSize);
    PSYSTEM_PROCESS_INFORMATION pspi = (PSYSTEM_PROCESS_INFORMATION)pProcessBuf.get();

    ULONG nRequired = 0;
    while((status = pfnNtQuerySystemInformation(SystemProcessInformation, pspi, nBufferSize, &nRequired)) == STATUS_INFO_LENGTH_MISMATCH && nRequired > nBufferSize)
    {    // Increase the buffer and try again
        nBufferSize = nRequired + 4096;
        pProcessBuf = std::make_unique<BYTE[]>(nBufferSize);
        pspi = (PSYSTEM_PROCESS_INFORMATION)pProcessBuf.get();
    }

    if(NT_SUCCESS(status))
    {
        while(1)
        {
            if(pspi->ImageName.Buffer)    // UNICODE_STRING buffers are not guaranteed to be null-terminated
                mapProcesses.emplace((uint32_t)(ULONG_PTR)pspi->UniqueProcessId, std::wstring(pspi->ImageName.Buffer, pspi->ImageName.Length / sizeof(WCHAR)));
            else
                mapProcesses.emplace((uint32_t)(ULONG_PTR)pspi->UniqueProcessId, std::wstring());    // the Idle process has a NULL ImageName

            if(pspi->NextEntryOffset == 0)
                break;

            pspi = (PSYSTEM_PROCESS_INFORMATION)((size_t)pspi + pspi->NextEntryOffset);
        }
    }

    return mapProcesses;
}


The undocumented method actually has a variant.  The above code uses the information class "SystemProcessInformation".  You can also call the API with "SystemBasicProcessInformation", which returns less information, but does so faster and more efficiently.


Comparing the APIs and their performance

Anytime you have multiple APIs that do the same thing, the question should be raised: how do they compare performance-wise?  So I wrote test code to compare the 5 different methods.  You could call each API once and time the call, or call each API a set number of times and time the overall loop.  Both of these work, but I went with a different method: I created a 5 second kernel timer, then called each API repeatedly in a loop as many times as possible.  You simply count the number of calls completed during the 5 seconds; the more calls a method completed, the more efficient it is.
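The harness can be sketched portably with std::chrono in place of the kernel timer (CountCallsFor is my name, and the workload is a stand-in for one of the enumeration functions):

```cpp
#include <chrono>
#include <cstdint>

// Count how many times fn() can be called before 'seconds' elapse.
// Same measurement idea as the post's kernel timer: more calls = more efficient.
template <typename Fn>
uint64_t CountCallsFor(double seconds, Fn fn)
{
    using clock = std::chrono::steady_clock;
    const auto deadline = clock::now() + std::chrono::duration_cast<clock::duration>(
                                             std::chrono::duration<double>(seconds));
    uint64_t nCalls = 0;
    while(clock::now() < deadline)
    {
        fn();    // e.g. one of the EnumProcesses* functions above
        ++nCalls;
    }
    return nCalls;
}
```

For example, `CountCallsFor(5.0, []{ EnumProcessesWts(); })` would reproduce one of the 5-second measurements.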

No surprise, WMI is the least performant of the methods at only 144 calls in 5 seconds.  That's an average of 35 milliseconds per API call.  As you'll see compared to the other methods, this is horrible.

EnumProcesses is the next worst API at 1287 calls in 5 seconds.  Roughly 9x better than WMI, but still pretty bad.

The Toolhelp library did slightly better at 1664 calls in 5 seconds.

The WTS API bested them all at 2061 calls in 5 seconds.

If you are willing to use the undocumented API, then the performance increased to 3838 calls in 5 seconds.  That's almost double the performance of the WTS API.

But this is nothing compared to the undocumented API variant.  The simplified SystemBasicProcessInformation version clocked in at 102155 calls in 5 seconds.  That's 50x better than the WTS and over 700x faster than WMI.


Conclusion

I feel like the WTS method is the best method for most uses.  It's both fast and simple to use.  Both EnumProcesses and Toolhelp are older and have effectively been superseded.  WMI should only be considered if you need remote access.

Which leaves the undocumented API.  If you are comfortable calling an undocumented API, then it is the most performant, and the variant is far and away the fastest method - with one big caveat: the variant requires Windows 11 version 26100.4770 or newer.  So you would likely need to code multiple methods and fall back to a supported method on older OSes.


You can find the full code sample on my GitHub page.


One final note: the code and performance numbers in this post are for demonstration purposes only; they do not represent the maximum performance possible.  For example, the undocumented method dynamically loads the function pointer and allocates a buffer on every call.  If you need to call the API repeatedly, it would be more efficient to perform that work once, outside of the loop.
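As a sketch of that optimization (a hypothetical wrapper, not part of the post's sample code), the function pointer lookup and the buffer can live in a small class and be reused across calls:

```cpp
#include <windows.h>
#include <winternl.h>
#include <memory>

// Hypothetical reusable wrapper: resolve NtQuerySystemInformation once,
// keep the buffer between calls, and grow it only when the process list
// outgrows it.
class CProcessQuery
{
    using PFNQUERY = NTSTATUS(NTAPI*)(ULONG, PVOID, ULONG, PULONG);

public:
    CProcessQuery() :
        m_pfnQuery((PFNQUERY)GetProcAddress(GetModuleHandleW(L"ntdll.dll"), "NtQuerySystemInformation"))
    {
    }

    // Returns a pointer into the reused buffer, or nullptr on failure.
    PSYSTEM_PROCESS_INFORMATION Snapshot()
    {
        if(!m_pfnQuery)
            return nullptr;

        ULONG nRequired = 0;
        NTSTATUS status = m_pfnQuery(SystemProcessInformation, m_pBuffer.get(), m_nBufferSize, &nRequired);
        while(status == (NTSTATUS)0xC0000004 /*STATUS_INFO_LENGTH_MISMATCH*/)
        {   // Grow and retry; later calls benefit from the larger buffer
            m_nBufferSize = nRequired + 4096;
            m_pBuffer = std::make_unique<BYTE[]>(m_nBufferSize);
            status = m_pfnQuery(SystemProcessInformation, m_pBuffer.get(), m_nBufferSize, &nRequired);
        }

        return (status >= 0) ? (PSYSTEM_PROCESS_INFORMATION)m_pBuffer.get() : nullptr;
    }

private:
    PFNQUERY m_pfnQuery = nullptr;
    ULONG m_nBufferSize = 0x80000;
    std::unique_ptr<BYTE[]> m_pBuffer = std::make_unique<BYTE[]>(m_nBufferSize);
};
```

Construct one CProcessQuery, then call Snapshot() inside the timing loop; the per-call GetProcAddress and allocation disappear from the measurement.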

 

Wednesday, December 3, 2025

Enable Bitlocker on Windows Home without a Microsoft Account

Microsoft offers 2 mainstream versions of Windows - Home and Professional.  This goes all the way back to Windows XP, and continues up to Windows 11.  For the most part the differences between the two are minor.  Most people are going to be fine with Windows Home (which costs less than Pro).

There is, however, one feature difference between the two which is critical for everyone to understand - full drive encryption.  This is a feature that encrypts your drive, thereby protecting your data even if someone steals your device.  The Pro version of Windows has "BitLocker" which encrypts the drive.  Home has a feature called "device encryption" which is basically BitLocker but with 1 key difference: BitLocker (Pro) can be enabled for both local and Microsoft Accounts, but device encryption (Home) requires a Microsoft Account.  This is so annoying: on the one hand I want encrypted drives, and on the other hand I don't want to be forced into an online account.  And I don't always have the luxury of Windows Pro (it mostly depends on what the machine shipped with, which in turn is reflected in the original cost of the machine).  Wouldn't it be great if you could have Windows Home, enable drive encryption, and still use a local account?  Well it turns out you can!!!  This guide will show you how.

The short answer is - you need to log into a Microsoft Account, but only temporarily.  Let's walk through this step by step.  If you're reading this guide then I'm going to assume you are running Windows Home.  I'm also going to assume that you have a local account (you've used one of the many methods to bypass the requirement for a Microsoft Account).  To start, open a command prompt with admin privileges (right-click and select "Run as administrator").  Enter the command "manage-bde -status".  This command will print the current status of drive encryption.  In the output look for the following:

Conversion Status:    Used Space Only Encrypted
Protection Status:    Protection Off

The conversion status "used space only encrypted" means the files on the drive are encrypted (the empty space on the drive is not).  But the "protection status" is off.  How can this be?  What this means is the drive is encrypted, but the decryption key is stored in plaintext on the local drive.  So a casual attacker could not view your files, but a skilled attacker would know how to recover the key and decrypt your files.  With Windows Pro, Microsoft offers a way to save the key to a file, which removes it from the drive entirely.  But with Windows Home, the only place to save the key is in the cloud.
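For reference, on Windows Pro the recovery key can be saved from the same elevated prompt; a sketch (assuming the OS drive is C: and D: is just an example destination):

```shell
manage-bde -protectors -get C: > D:\recovery-key.txt
```

This lists the drive's key protectors (including the numerical recovery password) and redirects them to a file on a different drive.  Home does not expose an equivalent "save to file" option in the Settings UI, which is why the cloud escrow step is needed.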

To solve this problem, create a new account on your computer.  Use an email address so that it is a cloud account.  And make the account into an administrator account.  Then simply log into that account once.  At this point the encryption keys are moved from the local drive to the cloud.  Go ahead and log out of the Microsoft Account and log back into your local account.

If you re-run the manage-bde command from earlier you should see: 

Conversion Status:    Used Space Only Encrypted
Protection Status:    Protection On

As you can see, your drive is now encrypted and the recovery key is no longer stored locally.  Next, open a web browser and log into your cloud account at https://account.microsoft.com/account.  Under "devices" click on your computer, then view more details about the device, and lastly under BitLocker is a link to manage recovery keys.  From here you need to copy your key info and save it in a text file.  Don't be dumb - save the file to a drive other than the encrypted drive.  Maybe an external USB drive, a NAS, a thumb drive, or a printed copy.

At this point you just need to clean everything up.  Go back into Settings on your PC and you can delete the Microsoft Account you created.  You can delete your recovery keys from the online Microsoft Account.  You could even delete your entire Microsoft Account if you so wish.  You now have Windows Home with only a local account and device encryption is fully enabled!

Saturday, June 6, 2020

Free C++ tools

I love programming in C++.  I have done so professionally for over 20 years now.  Even though C++ no longer has as much market share as it once did, now is the best time to learn C++ thanks to the availability of free and useful tools that make learning and effectively coding in C++ easier than ever.  Below is a list of my favorite free C++ utilities.

Visual Studio has long been one of the best development IDEs, but what's amazing is how many improvements Microsoft keeps making.  They are not content to rest on their laurels.  VS2017/2019 are truly awesome platforms for development.  And with the introduction of the Community edition, the bulk of the features are available for free to everyone.  Microsoft has offered free versions of Visual Studio going back to the Express editions of the early 2000s, but those versions were so crippled they were of little use.  Community is so full-featured that only the most demanding users would need to upgrade.  Microsoft is even expanding VS beyond Windows, branching into Linux and MacOS.

CppCheck is a free static code analyzer, meaning it inspects your code files looking for common mistakes.  CppCheck is surprisingly good at what it does.  If you run it against your code for the first time, you might be surprised at the problems and suggested improvements it finds.

Clang with the Clang Power Tools extension
Although I firmly believe that Microsoft's compiler is the best option for Windows, the Clang team is making significant improvements to their compiler.  That said, Clang does have at least one cool feature - clang-tidy.  Tidy is similar to CppCheck in that it analyzes your code looking for issues.  Unfortunately, clang-tidy has no UI, so it is not easy to use on its own.  That's where the Clang Power Tools extension comes in.  It wraps the features of Clang in an easy-to-use UI integrated into Visual Studio.

VerySleepy
VerySleepy is a code profiler, meaning it helps you analyze your code looking for performance bottlenecks.  I should point out that VS2017/2019 Community comes with a built-in profiler that is easy to use, and I would recommend it first.  But if for any reason you can't use the Microsoft profiler, check out VerySleepy.

I'll end this list with the latest tool I've discovered.  OpenCppCoverage is a code coverage tool, meaning when you run it the tool checks which lines of code actually executed and which did not.  This is useful when testing your code: it helps you find code that has gone untested, where the potential for bugs is higher.  Visual Studio does have built-in code coverage, but only in the higher-priced editions; the free Community edition does not offer this feature.  Fortunately this program and its corresponding Visual Studio extension do a great job.

Monday, November 18, 2019

C++ virtual destructors, a modern take

Several years ago I created a post where I looked at virtual destructors in C++.  In that post I argued that all C++ classes should have virtual destructors, unless you specifically knew what you were doing and understood the risks.  I originally wrote that before I learned the many benefits of "modern" C++ (aka C++11 and newer).  Modern C++ introduces a few changes that really improve the language, including changes to virtual methods and class inheritance.  So I would like to revise my suggestions for class designs in C++.

First, and most importantly, it is still a bug to derive from a class that does not have a virtual destructor.  As a developer it is your job to ensure you do not derive from a class with a non-virtual destructor.  At the same time, you should not write a class without a virtual destructor that allows someone else to derive from it.  But the great news is, modern C++ has a solution to both of these problems.
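To make the stakes concrete, here is a minimal sketch (the class names follow this post's convention but are mine): because CBase's destructor is virtual, deleting a CDerived through a CBase pointer correctly runs the derived destructor.  Remove the "virtual" and the derived destructor is silently skipped (formally, the delete becomes undefined behavior).

```cpp
#include <cassert>

class CBase
{
public:
    virtual ~CBase() = default;     // virtual: deleting via CBase* is safe
};

class CDerived final : public CBase
{
public:
    explicit CDerived(bool& bDestroyed) : m_bDestroyed(bDestroyed) {}
    ~CDerived() override { m_bDestroyed = true; }   // records that it ran

private:
    bool& m_bDestroyed;
};

// Returns true if ~CDerived ran when deleting through a base pointer.
bool DerivedDestructorRuns()
{
    bool bDestroyed = false;
    CBase* pBase = new CDerived(bDestroyed);
    delete pBase;                   // virtual dispatch finds ~CDerived
    return bDestroyed;
}
```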


Deriving from a base class without a virtual destructor:
C++ has a new keyword called "override" that tells the compiler to generate an error if the virtual function you have declared does not match a virtual function in a base class.  For example:

class CDerived : public CBase
{
public:
    CDerived();
    virtual ~CDerived() override;
};

In this example "CBase" is the base class.  If CBase has a virtual destructor then this code will compile.  But if CBase does not have a virtual destructor this code will not compile!

The keyword "override" is one of the best new features of modern C++.  Every virtual function should be marked with "override" except those introduced in the base class, which by definition do not override anything.


Creating a class without a virtual destructor:
If you create a class without a virtual destructor, that's fine, but someone else might derive from your class without your knowledge or permission.  This would be a bug.  If only there were a way to prevent this from happening.  Good news: in modern C++ there is.  Use the keyword "final" to indicate that no class may derive from yours.  For example:

class CWidget final
{
public:
    CWidget();
    ~CWidget();
};

In this example I can get away without a virtual destructor because no one could ever derive from my class thanks to the "final" keyword.



Given these new keywords, I would modify my original guidelines as follows.

1.  If you derive from another class, always use "override."
2.  If you create a new class, always use a virtual destructor unless both A) your class does not derive from another class and B) your class is marked with "final."  In this situation, none of your class methods should be marked as virtual.



A few interesting side notes.  First, "override" now supersedes "virtual."  You can use both keywords as I did above, or you can just use "override" which implies "virtual."

A second, more important note is regarding defaulted destructors.  C++ allows you to omit the destructor, or you can explicitly declare a destructor but use the "default" keyword to omit the destructor's definition.  In both cases, for a class that does not derive from anything, the compiler will generate a non-virtual destructor.  If you want/need a virtual destructor, write the code yourself (using override where appropriate) or declare it as follows:

virtual ~CWidget() = default;

Related to this last point, consider the following code:

class CBase
{
public:
    CBase();
    virtual ~CBase();
};


class CDerived : public CBase

{
public:
    CDerived();
};

Yes, the base class has a virtual destructor, and since you omitted the destructor the compiler will generate one for you.  Contrary to what you might expect, the generated destructor actually is virtual: virtual-ness is inherited, so any destructor in a derived class automatically overrides the base's virtual destructor.  The code is therefore not a bug, but relying on this implicit behavior makes the code harder to read and reason about, so it is clearer to state the relationship explicitly:

    virtual ~CDerived() override = default;

Wednesday, June 6, 2018

Synology wireless router update

About 1 year ago I posted a review of the Synology RT1900AC wireless router.  To sum up, it's a good router but I have had multiple issues with it over the years.

Most of the issues were Synology working out the kinks in the router.  Anyone today who buys the RT1900AC or its bigger brother the RT2600AC would no longer experience these issues.

However, the biggest and most annoying problem I experienced was the router not connecting to the Internet.  If the connection was broken for whatever reason (like a power outage), only about 50% of the time would it reconnect.  If it failed to reconnect, the problem could only be fixed by rebooting the router, often multiple times.

Well I have good news.  I think Synology has finally fixed this issue in one of their regular router updates.  Shortly after my initial review the problem went away.  It has been almost a year now and my router has not had an issue reconnecting to the Internet.  I know 2 other people with Synology routers and they have not experienced connection issues either.  This is great news.  I can now wholeheartedly recommend either of the Synology routers to anyone looking for a better wireless router.


In other good news, anyone who follows technology may remember that back in October of 2017 researchers announced KRACK, a vulnerability that affected all hardware and software using the WPA2 protocol, including Windows, MacOS, Linux, Android, wireless routers, etc.  Synology pushed a fix for KRACK one day after the announcement.  The only company I am aware of that released a fix earlier was Microsoft, which had quietly pushed a fix a few weeks before.  Synology got the fix out to their customers faster than most major companies, including Google and Apple.  Way to go Synology.  Oh, and many Linksys and Netgear routers were never patched, so they are still vulnerable.

Monday, October 9, 2017

Upgrade to Windows 10 for free!

Good news for anyone wishing to upgrade to Windows 10 for free.  When Windows 10 was first released, Microsoft offered a free upgrade to all existing Windows owners.  This upgrade was convenient in that it was offered through Windows Update, so the upgrade was relatively quick and painless.  Just walk away from your computer for an hour or so and come back to find Windows 10 installed with all your data and programs still there.

However, this free upgrade offer was for a limited time only, and it is no longer an option.  But I have good news: anyone with a valid activated copy of Windows can still upgrade to Windows 10 for free.  The key is you cannot do an in-place upgrade; instead you must perform a fresh install.  This is actually my preferred way to upgrade to a new operating system.  Here's how you do it:

1.  You need your existing Windows license key.  This license key might be printed on a sticker attached to your computer.  If not, the easiest way to get your product key is to download and run the utility ProduKey.

2.  You need to obtain the Windows 10 installation files (ISO image).  If you Google for it you can find the download and media creation links direct from Microsoft.  Be aware, you cannot upgrade to just any edition of Windows 10; you need to stay within the same edition.  So if you currently have Windows 7 Home Premium you can install Windows 10 Home, and if you have Windows 7 Professional you can install Windows 10 Professional.

3.  Make a complete backup of your current system.  Again, this is a fresh install that will erase all your data, so don't forget to back up first!

4.  Install Windows 10 onto your computer, being sure to erase the existing copy of Windows and install a fresh copy.  If you are prompted for a license key, enter your existing Windows license key.  On many newer computers the license key is stored on the motherboard itself - if so, Windows will automatically detect this license key and use it without prompting you.

5.  After installation, verify Windows 10 is activated with a "digital license."

6.  Reinstall the programs you use and restore your data from the backup.

7.  After installation, follow these suggestions to configure Windows 10 so that it's usable.


I have used this technique twice to upgrade older Windows 7 systems to Windows 10.  I have not tried Windows 8.x, but I assume it works there as well.  However, older Windows XP and Vista license keys may not work since they are technically out of service.  Also, I cannot guarantee this technique will always work.  Microsoft could stop this at any time, so proceed with caution.

Update: I can confirm this technique still works as of June 2018.  I have also learned that it does work with "Retail" license keys but not "MSDN" license keys.  A "retail" license key is one that came with a new PC or a legit copy of Windows purchased separately.  "MSDN" license keys are keys used by developers and IT professionals.  So as long as Windows came preinstalled on your computer this technique should allow you to upgrade to Windows 10.

Friday, July 21, 2017

Synology RT1900AC wireless router review

My review of the Synology RT1900AC wireless router would best be summed up as "the wireless router I so want to recommend, but just can't because of issues."

So first a little background.  Like most people I've had several different wireless routers over the past decade.  My previous two routers (a Linksys and a Netgear) I replaced not because they were broken or too slow, but because security flaws were discovered in them that would allow someone on the Internet to compromise my network, and the manufacturers refused to release firmware fixing the problems.  Most router manufacturers only support their hardware for a year or two; after that they want you to buy new hardware - what a waste and what a shame.

Now Synology is a company I've used for years, they are most well known for their excellent NAS (Network Attached Storage devices).  I have had a Synology NAS for many years and what I love about them is their support.  They release regular updates for their hardware, and they support their older hardware far longer than most companies would.

So in 2015 when Synology announced they were going to release a wireless router I was very excited!  Finally a company that would support their wireless router long term.  When the Synology RT1900AC was finally released in North America in early 2016 I was a very early adopter, purchasing my unit within 1 week of release.

Unfortunately I've had a number of issues since then and had to contact their tech support on multiple different occasions.  Here's a summary of the issues I've had:

  1. I have a Raspberry Pi connected using wireless, but from time to time the connection would drop out.  The Raspberry Pi was connected using a very common "nano" wireless adapter.  After many emails with tech support I found a solution: if I replaced the wireless adapter with a different one with a larger antenna, the connection issues went away.  What's frustrating about this problem is that the Raspberry Pi was only about 2 feet from the RT1900AC; it should have had a strong signal.  Also, the same nano wireless adapter had no issues with my previous Netgear wireless router.  So something about the combination of this nano adapter and the RT1900AC did not work well.
  2. When connecting to my home network remotely using VPN, I could originally access machines in my network but not the RT1900AC's management interface itself.  Tech support helped me get correct firewall rules in place to allow access to the RT1900AC.
  3. When changing the firewall rules to allow access to the RT1900AC, it removed access to other machines in my network.  Neither tech support nor I was able to find the problem, and I just gave up on VPN for about a year.  I did eventually get it working; continue reading for those details.
  4. A few months ago Synology phased out support for their old VPN server package and replaced it with a new package called "VPN Server Plus."  Since the old VPN wasn't working for me, I ditched it and installed the new one.  When I went to enable OpenVPN it gave me a weird error about installing a certificate.  Tech support had never seen that error before and had no idea what to do.  I tried a factory reset and that fixed the OpenVPN certificate error.  Now, using the new VPN server, I'm finally able to access both the local machines in my network and the RT1900AC itself.
  5. Far and away the biggest issue I've had is the router not reconnecting to the Internet.  This happened to me the very first day I got the router and continues to happen to this day.  In short, if I reboot my router, there's a power outage, or my ISP drops my connection for some reason, when the RT1900AC comes back up only 50% of the time will it reconnect to the Internet.  The rest of the time it won't connect, no matter how long I wait.  The router can be up for days in this unconnected state; it will never reconnect.  The only solution I've found is to reboot the router repeatedly until it does reconnect.  Sometimes I have to reboot the cable modem as well.  I've had it happen before where the Internet drops and I literally reboot my router over and over for 2 hours before it finally reconnects.  Now I know this issue is Synology's; I have worked on enough networks to be able to diagnose this.  Working with tech support I rebooted my router a dozen times, and I think it reconnected 5 times and failed to connect 7 times (they analyzed logs from these attempts).  I then connected my Windows 7 computer directly to the cable modem and rebooted a dozen times.  Windows connected to the Internet all 12 times.  I've tried a factory reset, no change.  I've updated to every firmware as they release them, no change.  I purchased a new cable modem, no change.  I have a family member in another city with a RT1900AC and they have the exact same problem.  Synology even mailed me a replacement unit at their cost for me to try.  That unit experienced the same problem.  They have tried to diagnose this problem but cannot figure it out.  In the meantime I know of at least 2 people with this same behavior.  It's very frustrating - no one wants a wireless router that won't connect to the Internet.  What good is that?



Even though I've had issues with this router, it's still a good router.  In fact, I would go as far as to say it's better than most wireless routers.  But it's far from perfect and it definitely has not lived up to my very high expectations.  Many of the problems I've faced were fixed in software updates over the months.  But that network connection issue - if they could fix that, I would wholeheartedly suggest anyone and everyone should buy this router.

I've spent most of this post talking about the problems.  But I did want to mention the good things about this router.
  • Regular software updates.  Synology publishes updates about once a month.
  • The web management UI is far and away the best I've seen - way better and more responsive than anything from Netgear, Linksys, etc.  I'm pretty sure they also have a phone management app, but I have not tried that.
  • The router can support multiple Internet connections (e.g. both a DSL modem and a cable modem) and can operate them in either load-balancing or failover modes.
  • Built-in support for 20 different DDNS providers, including Synology's own which works great!
  • Can connect to the Internet via mobile 3G/4G with additional hardware.
  • Very good parental controls/filtering as well as QoS and wifi priority.
  • Tons of services like VPN, SSH, FTP, SFTP, SMB, etc.
  • USB and SD card slots can function as lightweight NAS for your network.
  • VPN server and VPN client with support for common providers such as OpenVPN, SSTP, L2TP, and PPTP.
  • Additional packages such as media server to host media files for other devices on your network.

To sum up, the RT1900AC router is a good piece of hardware that still has some bugs, and hopefully Synology is able to work them out.  If you're a network power user and don't mind a little extra work, give it serious consideration.  If you're a regular user you might want to steer clear, as you could be overwhelmed if you run into issues as I have.