Any oft-repeated manual process is a good candidate for automation.
The question for me isn't whether to automate acceptance tests or not, but how we keep them relevant and manageable over time.
One way is to acceptance test the features that matter most to you, but how do you discover which features those are?
If parts of your product enabled different revenue streams, you could bias your acceptance tests towards features that generated the most revenue.
If you had statistics on how your product was used out in the wild, you could bias your acceptance tests towards the most commonly used features.
Or, I guess, you could just bias your tests towards the stuff that keeps bloody breaking!
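To make that a little more concrete, here is a minimal sketch of how you might blend those signals into a single priority score per feature. The feature names, weights and numbers are all hypothetical; plug in whatever revenue, usage and defect data you actually collect.

```python
# Hypothetical data: feature name -> (revenue share, usage share, recent breakages)
FEATURES = {
    "checkout":       (0.50, 0.20, 3),
    "search":         (0.10, 0.60, 1),
    "profile_editor": (0.05, 0.15, 0),
    "csv_export":     (0.35, 0.05, 7),
}

def priority(revenue, usage, breakages, w_rev=0.5, w_use=0.3, w_break=0.2):
    """Blend the three signals into one score; the weights are a guess."""
    return w_rev * revenue + w_use * usage + w_break * min(breakages, 10) / 10

# Rank features by score: the top of this list is where acceptance
# test effort is most likely to pay off.
ranked = sorted(FEATURES.items(), key=lambda kv: priority(*kv[1]), reverse=True)

for name, data in ranked:
    print(f"{name:15s} score={priority(*data):.2f}")
```

The exact weights matter far less than having the conversation about them; the point is that "most important" stops being a gut feeling and becomes something the team can argue about with numbers.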
But how do we trim out the fat?
One possible answer is to define a budget for your automated acceptance tests: perhaps a ratio of lines of code to minutes of execution time for acceptance testing.
You now have a simple metric around which to discuss what is most important to you.
Expensive test, not much value, drop it.
Expensive test, pretty valuable, see if you can get it cheaper without losing too much value (refactor).
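Here is one rough way that budget conversation might look in code. The test names, value scores and thresholds below are invented for illustration; the point is only that once cost and value are both numbers, the drop/refactor/keep decision gets much easier to discuss.

```python
from dataclasses import dataclass

@dataclass
class AcceptanceTest:
    name: str
    minutes: float   # execution time per run
    value: float     # 0..1, however you choose to score importance

BUDGET_MINUTES = 30   # total time you are willing to spend per suite run (made up)
EXPENSIVE = 5.0       # a single test slower than this counts as "expensive"
LOW_VALUE = 0.3       # below this, the test isn't buying you much confidence

suite = [
    AcceptanceTest("checkout_happy_path", 8.0, 0.9),
    AcceptanceTest("legacy_report_export", 12.0, 0.2),
    AcceptanceTest("login", 1.5, 0.8),
]

total = sum(t.minutes for t in suite)
print(f"suite takes {total:.0f} min against a budget of {BUDGET_MINUTES} min")

for t in suite:
    if t.minutes > EXPENSIVE and t.value < LOW_VALUE:
        verdict = "drop"
    elif t.minutes > EXPENSIVE:
        verdict = "refactor: keep the value, cut the cost"
    else:
        verdict = "keep"
    print(f"{t.name:25s} {t.minutes:4.1f} min  value={t.value:.1f} -> {verdict}")
```

However you score value, the budget forces the trade-off into the open: every expensive test has to earn its place in the suite.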