A bit of work got done on this today. Things have moved on quite a lot since my pentester days (like a decade ago), so I had to spend some hours catching up on how public vulnerabilities are stored (the metadata in particular).
NVD (which builds on top of the CVE list) is likely to be the source of published software vulnerabilities you will be able to download and check against your software definitions to answer:
Is my software vulnerable to something?
A bit of what I learned today:
- CVEs are pulled by NVD (nvd.nist.gov) regularly; I'm not sure if that's daily or more often.
- NVD uses a far more elaborate JSON schema to store the CVE metadata and includes many more things; the most relevant to me are:
- CPEs (the affected vendor, product, version and, I think, language. This is far more structured than just saying "Cisco"; the point is to organise the data in a way that lets you query it down to the version level)
- CWE (a database of weaknesses, also US-sponsored; pretty impressive, I must admit)
- The NVD folks also score each vulnerability using two models, CVSS v2 and v3
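To make the CPE idea above concrete, here is a minimal Python sketch that splits a CPE 2.3 "formatted string" into its named attributes. Real CPEs can contain escaped characters (e.g. "\:"), which this naive split ignores; it's just a first look at the data model, not a full parser.

```python
# The 11 attributes of a CPE 2.3 name, in the order they appear after
# the "cpe:2.3:" prefix.
CPE_FIELDS = [
    "part", "vendor", "product", "version", "update", "edition",
    "language", "sw_edition", "target_sw", "target_hw", "other",
]

def parse_cpe(cpe: str) -> dict:
    """Naively split a CPE 2.3 formatted string into a field -> value dict."""
    prefix, version, *attrs = cpe.split(":")
    if prefix != "cpe" or version != "2.3" or len(attrs) != len(CPE_FIELDS):
        raise ValueError(f"not a CPE 2.3 formatted string: {cpe}")
    return dict(zip(CPE_FIELDS, attrs))

# Example: an application ("a"), vendor cisco, product ios, version 15.2.
print(parse_cpe("cpe:2.3:a:cisco:ios:15.2:*:*:*:*:*:*:*"))
```

The "*" values are wildcards meaning "any", which is what makes version-level queries against the NVD data possible.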
So the flow seems like this:
1- You define your software (Asset Management, etc.) and its relationships. You'll assign an existing CPE (from the NVD database, pulled every day) or create a new one in eramba (which will be "local" to you)
2- eramba will pull the NVD JSON files. No clue at this point how to store them, as they are very complex metadata models that won't fit into a DB straight away.
3- Using CPE magic, comparisons will be made between what you defined and what was pulled
4- Potential matches are shown for you to manage and so on (that piece will be in a DB for sure)
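Step 3 above could be sketched roughly like this: a naive wildcard comparison between an asset's CPE attributes and those attached to a pulled CVE. The function and field names here are mine for illustration, not eramba's actual implementation, and real CPE matching (version ranges, etc.) is more involved.

```python
def cpe_matches(asset_cpe: dict, vuln_cpe: dict) -> bool:
    """True if every attribute of the vulnerable CPE matches the asset.
    A literal "*" on either side matches anything."""
    for field, vuln_val in vuln_cpe.items():
        asset_val = asset_cpe.get(field, "*")
        if "*" not in (vuln_val, asset_val) and vuln_val != asset_val:
            return False
    return True

# Hypothetical asset definition and a CPE pulled from an NVD CVE entry:
asset = {"part": "a", "vendor": "cisco", "product": "ios", "version": "15.2"}
vuln  = {"part": "a", "vendor": "cisco", "product": "ios", "version": "*"}

print(cpe_matches(asset, vuln))  # a potential match to show the user
```

Anything that returns True here would land in the "potential matches" list of step 4 for a human to confirm or dismiss.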
At this point the volume and model of the data is what concerns me most. Also, I still don't fully understand how CPE works, but I guess a bit more reading will provide clues.
If any of you have had contact with this stuff at this level of detail, your input would be much appreciated.