We performed a comparison between OpenText Silk Test and Tricentis Tosca based on real PeerSpot user reviews.
Find out in this report how the two Functional Testing Tools solutions compare in terms of features, pricing, service and support, ease of deployment, and ROI.
"The feature I like most is the ease of reporting."
"The statistics that are available are very good."
"Scripting is the most valuable. We are able to record and then go in and modify the script that it creates. It has a lot of generative scripts."
"The ability to develop scripts in Visual Studio, Visual Studio integration, is the most valuable feature."
"A good automation tool that supports SAP functional testing."
"The scalability of the solution is quite good. You can easily expand the product if you need to."
"The major thing it has helped with is to reduce the workload on testing activities."
"The most valuable features of Tricentis Tosca are the ease of use, you do not need to program if you do not want to."
"The tool can be handled without any knowledge in parameterisation, especially the TestCaseDesign which makes the tool mighty and stable."
"This solution is easy to use for everybody, including those who are not IT-educated."
"Image recognition: It has allowed us to automate a GUI section of our product which involves drawing different topologies."
"I am impressed with the product's script test."
"The Model-Based Test Automation is the most valuable feature, where you can create reusable components. Even though we are using a scriptless automation tool, there still needs to be an understanding of how to create reusable components and how to keep refactoring and how to keep regression, the test scripts, at an okay level. We are coupling Tosca with some other risk-based testing tools, as well, but automation is primarily what we're using Tosca for, the scriptless, model-based technology which is driving automation for us."
"For beginners, the product is good, especially for those who are interested in the quality side of software testing."
"We like the fact that it works across mobile, desktop, web, and APIs. Due to this, the solution has a broad range of applications."
"The pricing is an issue, the program is very expensive. That is something that can improve."
"The support for automation with iOS applications can be better."
"We moved to Ranorex because the solution did not easily scale, and we could not find good and short term third-party help. We needed to have a bigger pool of third-party contractors that we could draw on for specific implementations. Silk didn't have that, and we found what we needed for Ranorex here in the Houston area. It would be good if there is more community support. I don't know if Silk runs a user conference once a year and how they set up partners. We need to be able to talk to somebody more than just on the phone. It really comes right down to that. The generated automated script was highly dependent upon screen position and other keys that were not as robust as we wanted. We found the automated script generated by Ranorex and the other key information about a specific data point to be more robust. It handled the transition better when we moved from computer to computer and from one size of the application to the other size. When we restarted Silk, we typically had to recalibrate screen elements within the script. Ranorex also has some of these same issues, but when we restart, it typically is faster, which is important."
"Could be more user-friendly on the installation and configuration side."
"The solution has a lack of compatibility with newer technologies."
"Everything is very manual. It's up to us to find out exactly what the issues are."
"They should extend some of the functions that are a bit clunky and improve the integration."
"While the initial setup was straightforward, we required assistance with the configuration to ensure that everything was done correctly."
"Not being able to mask test data in relation to testing data management, in my opinion, is also a limitation."
"Very difficult to get information about licensing costs."
"Tosca's reporting features could be better. Tricentis had a reporting tool called Analytics, but it didn't function properly after they reworked it. After that, they tried a new approach with key-tracing, and that didn't work."
"The tool lags in client-based applications. We have also encountered issues with the features in integrations."
"The integration with mobile testing could be useful."
"ScratchBook execution needs to be improved as Tosca crashes multiple times."
"Tricentis Tosca could improve on the ease of use. There is a steep learning curve. The reporting section could be better and some of the new features could be simplified. Additionally, the user management of the client and the server are confusing. There should not be two."
OpenText Silk Test is ranked 26th in Functional Testing Tools, while Tricentis Tosca is ranked 1st with 98 reviews. OpenText Silk Test is rated 7.6, while Tricentis Tosca is rated 8.2. The top reviewer of OpenText Silk Test writes "Stable, with good statistics and detailed reporting available". On the other hand, the top reviewer of Tricentis Tosca writes "Does not require coding experience to use and comes with productivity and time-saving features". OpenText Silk Test is most compared with OpenText UFT One, Selenium HQ, OpenText UFT Developer, Apache JMeter and froglogic Squish, whereas Tricentis Tosca is most compared with Katalon Studio, OpenText UFT One, Worksoft Certify, Postman and Testim. See our OpenText Silk Test vs. Tricentis Tosca report.
See our list of best Functional Testing Tools vendors, best Regression Testing Tools vendors, and best Test Automation Tools vendors.
We monitor all Functional Testing Tools reviews to prevent fraudulent reviews and keep review quality high. We do not post reviews by company employees or direct competitors. We validate each review for authenticity via cross-reference with LinkedIn, and personal follow-up with the reviewer when necessary.