There is no avoiding software testing: no matter what stage of the software development life cycle you are at, testing will happen. Over the last 25 years I have shipped a lot of software, ranging from COBOL through to PHP and many programming languages in between. I have worked on medical, insurance, financial, and DRM software, websites, mainframes, minis, UNIX, Mac, PC... the list goes on. During this time I have learned that teams ship and test software differently. I wouldn't say there is one right way to do it; it just needs to be done!

At one extreme is Google, which ships its software as beta. Essentially, they have their customers test their software, or at least let them know it is still a work in progress. This isn't to say Google doesn't test before releasing to beta; they do. At the other end of the spectrum would be medical or military software, where a bug could be fatal. Needless to say, they test their software exhaustively before they ship.

The main point is that if there is a bug in the software, it will eventually show itself. The question then becomes: how much effort do you put into testing before you unleash your software upon your customers and users? This is an interesting question, because the answer also aligns with your brand and the level of quality associated with it. Google can release beta software and seems to be doing alright as a company; people see them as an innovator, and the idea of using beta software is part of their brand. Your company may be associated with high quality, reliability, and working the first time, so releasing software with bugs may not be the best approach.
The great thing is that software testing is a mature practice, and there are many options available to organizations that ship software: testing frameworks, approaches, and standards. I see two themes in software testing: the first is the traditional approach, born out of the big quality standards like ISO and CMMI; the second came from the agile approach to developing software. I have experience with both themes, and my preference is the agile / automated approach.
CMMI is descriptive, not prescriptive. In other words, it is a set of guidelines, and your organization needs to have or create the practices that fit those guidelines. CMMI doesn't tell you how to do it (because that depends on organizational culture); it just says you need to do these things if you want to achieve a certain level of maturity in your organization's software development capability.
In my mind, agile testing means automated testing: testing is as automated as it can be and is built into the software development life cycle. In some situations an automated test suite is built before coding of the software has begun; this is known as test-driven development (TDD). In other situations a dedicated Quality Assurance (QA) team builds automated tests while the software is being written, working in concert with the developers to test features as they are built; this is my preferred approach. The cool part is that a project can have thousands of automated tests, and if a software build makes it through all of them it is automatically migrated to quality assurance for the next round of testing to begin. If the test suite fails, everyone knows immediately and can address the issue based on knowledge of the most recent code changes. It can be very motivating for developers to have such a mature development environment.
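To make the idea concrete, here is a minimal sketch in Python using the standard unittest module. The apply_discount function and its rules are hypothetical, invented purely for illustration; in test-driven development the tests would be written first, fail, and then drive the implementation into existence.

```python
import unittest

def apply_discount(price, percent):
    """Return price reduced by percent, rejecting out-of-range input."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTests(unittest.TestCase):
    # In TDD these tests exist before apply_discount does; they fail
    # until the implementation above satisfies them.

    def test_typical_discount(self):
        self.assertEqual(apply_discount(100.00, 25), 75.00)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(49.99, 0), 49.99)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.00, 150)

if __name__ == "__main__":
    unittest.main()
```

In the kind of pipeline described above, a build server runs suites like this on every check-in and only migrates the build to QA when every test passes; a single failure stops the promotion and points the team straight at the most recent changes.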
No matter where your organization sits with all this, thorough testing needs to be done, and having testers who model your end users as closely as possible is preferred. If you're not that thorough during your software development process, your users will quickly let you know where the flaws can be found. Hopefully, your organizational brand will not be negatively impacted by these user-discovered flaws.