In eramba we can tell whether a control passes or fails: if just one of its tests/audits fails, the control fails.
But I think that even when an audit/test fails, the control can still be an effective control.
Let me explain. If a control has 4 audits, and 3 of them passed and one failed, then the control will be "red" = Fail.
The problem is that we cannot see whether the control is effectively mitigating the risk. The 3 audits that passed could be very critical audits (key audits), and the one that failed could be of minor importance. If it were possible to rank the audits, we could say that the result of the control was "failed but still very effective" (90%-100% = very effective).
Conversely, a control could have the status "failed and not effective (10%)" even though 3 out of 4 audits passed, because the 3 that passed were not very critical audits and the one that failed was very critical; therefore the control is shown as "not effective".
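The ranking idea above can be sketched as a weighted pass rate. This is not an eramba feature; the function name and the weights are illustrative assumptions for discussion only:

```python
# Sketch of the weighted-audit idea (NOT an eramba feature): each audit
# gets a criticality weight, and control effectiveness is the weighted
# share of audits that passed. All names/weights here are assumptions.

def control_effectiveness(audits):
    """audits: list of (passed: bool, weight: float) tuples."""
    total = sum(w for _, w in audits)
    passed = sum(w for ok, w in audits if ok)
    return 100 * passed / total

# 3 key audits pass, 1 minor audit fails -> still highly effective (~93.3)
print(control_effectiveness([(True, 5), (True, 5), (True, 4), (False, 1)]))
# 3 minor audits pass, 1 key audit fails -> not effective (~23.1)
print(control_effectiveness([(True, 1), (True, 1), (True, 1), (False, 10)]))
```

The same four-audit control lands at opposite ends of the effectiveness scale depending purely on which audit failed, which is exactly the distinction a flat pass/fail status cannot express.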
I have a similar query; I'm not sure whether this was ever addressed, since the above was posted 6 years ago. I could not find any supporting documentation or other related posts in this forum.
My challenge is that we need to start calculating control effectiveness using a rating mechanism similar to the one shown below.
The idea is that during internal control audits, one can rate the effectiveness of the control. Ideally, this should be reflected in the control itself (it is a property of the control, not of the audit on the control). I can achieve that by adding a custom drop-down field with the % ranges on the control, which would be updated as part of the control audit process.
However, how would this be linked to the risks that are being mitigated by this control? It would help if the residual risk ("risk treatment" in Eramba terminology) were updated to reflect the status of the related control's effectiveness.
I'm wondering whether anyone else has encountered similar requirements and how they have used Eramba to facilitate such risk assessments and the related measurements and monitoring.
This is not a solution, just a description of how we work with information security controls.
We have our information security maturity tested on an ongoing basis, ranked from 1 (ad hoc) to 5 (optimized).
To score 4/5 we need to focus on an ongoing improvement of our controls.
To do so, we use the "maintenance" part of controls to describe what evidence needs to be gathered and to assess whether or not the current control actually adds value to information security.
So instead of having a separate sheet or rating for whether a control should be improved, we work with an ongoing improvement process. Of course this takes time, but we also have at least one full-time resource working on compliance in relation to information security.
In other words: our effectiveness is worked on all the time.
We do this to ensure that we also work on discovering any holes in our security scheme.
I am curious how the effectiveness of the control affects the residual risk. As I understand it, the residual risk expresses the risk that remains after your effective controls have been performed; if they are not performed, the risk is not being mitigated?
But maybe I have misunderstood something in your use case?
Let's say we have a risk of unauthorised access to systems. One of the related controls may be "Periodical user access rights reviews". Now, if the control effectiveness is 100%, assuming it is being done on time, with the expected coverage, and that no exceptions are found, the residual risk will be greatly reduced or even fully mitigated by this control.
If, on the other hand, one finds that the control is only 50% effective, because there are improvements to be made to the control, then the residual risk will still need to remain higher than in the previous case.
So somehow, the control effectiveness needs to be reflected in the risk mitigation effectiveness and thus in the residual risk.
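One simple way to express the link being described here is to scale the inherent risk score by how ineffective the control is. Eramba does not compute this; the formula, the function name, and the scores below are assumptions purely for illustration:

```python
# Illustrative sketch only: letting control effectiveness feed into
# residual risk. Eramba does NOT do this calculation; the linear formula
# and the example scores are assumptions for discussion.

def residual_risk(inherent_risk, effectiveness_pct):
    """Scale the inherent risk score by how ineffective the control is."""
    return inherent_risk * (1 - effectiveness_pct / 100)

print(residual_risk(8, 100))  # 0.0 -> risk fully mitigated
print(residual_risk(8, 50))   # 4.0 -> half the inherent risk remains
```

A real scoring model would likely be non-linear and combine several controls, but even this toy version shows why a 100%-effective and a 50%-effective control should not leave the same residual risk.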
I would choose to handle that another way; let me try to describe my approach.
Maintenance:
I would create a maintenance with the following tasks:
Date: 11/03-2025
Review last audit report for non-conformities and update the control
Conduct periodical User access rights review
Review must cover X,Y,Z
Exceptions must be mitigated
etc.
Audit:
I would Create the audit with the following tasks:
Date: 20/03-2025
Audit that the necessary users have had their rights reviewed
Audit that the coverage is "correct"
Audit that the maintenance has been done within scope
etc.
Then, the next time I handle the maintenance, one of my tasks is to review the latest audit report for findings, and to improve, implement and mitigate accordingly.
This way I handle the risks and make sure they are mitigated.
And at the same time I am "accepting" that some of my controls need improvement - does that make sense?
If the control is not running as it should or has issues (i.e. only 50% effective), then logging an issue against the control will trigger the built-in status "Control Has Issues". This status is reflected throughout the connected risks/compliance requirements, and you can then manage it however you like via reports, notifications etc. Obviously, once all issues are resolved and the control is functioning as expected (verified via the audit process), the status goes away.
That is interesting, I have never used the issues component, I will check it out. It definitely makes sense that this is reflected in the risks and related items.
I would suggest leaning into the custom status configuration and related notifications. Let's say a control fails an audit - I believe the default custom status will apply that label to all related risks (or you can create your own conditions). From there, you can trigger something to happen when that custom status is applied.
A workflow, for example -
Control Audit fails
Label applied
Maybe look for a way to trigger if the last review date of a risk was before the failed audit date (no idea if that'll be available).
Notification to re-review all risks that have not been reviewed since an audit failure was triggered.
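The trigger condition in the workflow above (risk last reviewed before the failed audit) is easy to state precisely. A minimal sketch, assuming hypothetical field names since this is not eramba's API:

```python
# Hypothetical sketch of the trigger logic in the workflow above: flag
# any risk whose last review predates a failed control audit. The field
# names ("name", "last_review") are assumptions, not eramba's data model.

from datetime import date

def risks_needing_review(risks, failed_audit_date):
    """risks: list of dicts with 'name' and 'last_review' (a date)."""
    return [r["name"] for r in risks if r["last_review"] < failed_audit_date]

risks = [
    {"name": "Unauthorised access", "last_review": date(2025, 2, 1)},
    {"name": "Data loss",           "last_review": date(2025, 4, 1)},
]
# Audit failed 20/03-2025 -> only the risk reviewed before that date is flagged
print(risks_needing_review(risks, date(2025, 3, 20)))  # ['Unauthorised access']
```

Whether this comparison can be expressed inside eramba's notification conditions is exactly the open question; if not, the same check could run externally against exported data.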
Of course, there are other ways - an audit failure could be considered an incident, and you could build that workflow: attach the incident to the relevant items, with an early stage being to re-assess the risk, work it out, and once fixed, re-assess again.
The main thing that's difficult about evaluating control effectiveness against risks is that controls are not always weighted equally. Suppose you've got 5 controls related to one risk, and one control fails. Depending on that control's importance, the result could be anything from no change to the risk to a "you sunk my battleship" risk - thus, the approach should be more about triggering a re-review when key events happen than about automating the change in control effectiveness.