Balancing Security and Flexibility

EmFroese
New Contributor II

Would you be interested in AI that monitors for security threats in the background and takes action automatically? Or do you prefer to control all changes manually? Essentially, how comfortable are you with an “auto-pilot” for security that intervenes when needed (for example, auto-quarantining a malware-infected device or adjusting a filter rule)? What would make you trust such a system?


2 Replies

ktrojano
Release Candidate Programs Tester

Yes, definitely! I would expect that an automatic action would be quicker than manual intervention. As long as the “auto-pilot” has an option for disengaging or reversing the action if needed, I would be comfortable with it. I would trust this type of system if it came from a trusted vendor, one that has worked with the Apple platform for years and knows the importance of beta testing before product release.

Kimberly Trojanowski

Jason33
Contributor III

I'd be comfortable with it, and I assume the industry is heading that way sooner or later. For me it's six of one, half a dozen of the other. Would the AI be smarter than the user and 'know' that the .dmg you downloaded actually is not legitimate? Would it prevent a user from clicking on a bad Google link? Would it be more proactive or reactive? Post-incident, is there any analysis or summarization provided? I'm not a cybersecurity engineer, so my thoughts on this probably aren't going to match those of someone who does that for a living.