In text analytics, the same workflows tend to be run not once but repeatedly. For example, you might have built the perfect pipeline for analyzing online conversations about your brand and want to run it monthly to see how themes and sentiments are changing.
Manually reapplying the same steps each time is both time-consuming and inefficient. Our latest feature eliminates this issue: you can now save and re-run pipelines, as well as save and reuse models, in Dcipher Analytics!
Once you’ve built your pipeline, you can save it with a few clicks and reuse it in your next project. The step-by-step guide below walks you through the process.
1. Create your pipeline
In your new project, create your pipeline by applying the operations your customized model requires.
In this example, the Pre-processing Wizard is used to clean the text, and sentiment analysis is then applied. For a detailed explanation, see the Pre-processing Wizard and Sentiment Analysis articles in the Help Center.
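Conceptually, a pipeline like this is just an ordered chain of steps, where each step consumes the previous step's output. The sketch below illustrates the idea in plain Python; it is not Dcipher's internals, and the tiny lexicon and function names are purely illustrative:

```python
import re

# Tiny illustrative sentiment lexicon (an assumption for this sketch,
# not the model Dcipher applies).
LEXICON = {"great": 1, "love": 1, "good": 1, "bad": -1, "poor": -1, "hate": -1}

def preprocess(text):
    """Lowercase and tokenize -- a stand-in for the Pre-processing Wizard."""
    return re.findall(r"[a-z']+", text.lower())

def sentiment(tokens):
    """Sum lexicon scores and map the total to a label."""
    score = sum(LEXICON.get(t, 0) for t in tokens)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

# The pipeline is simply the ordered list of steps.
PIPELINE = [preprocess, sentiment]

def run(text, pipeline=PIPELINE):
    """Feed each step's output into the next step."""
    result = text
    for step in pipeline:
        result = step(result)
    return result

print(run("I love this brand, the support is great!"))  # positive
```

Because the steps are kept as a list rather than hard-coded, the same chain can be reapplied to any new text, which is exactly what saving a pipeline makes possible in the app.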
2. Save the pipeline
You can then save the pipeline as a project template using the bookmark button in the top-right corner of the page.
* Clicking this button also lists your previously created project templates.
Name the project template, optionally add a description, and click “Done.”
3. Re-run the pipeline
After creating your project template, you can reuse it when creating a new project. Click “Your customized project templates” in the template selection screen to access your saved pipelines.
Then select the saved project template you want and click “Continue.”
Replace the dataset with a new one and review the pipeline defined in the template. Once you have made your changes, click “Start.”
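The steps above amount to: save the pipeline definition once, then reload it and point it at a fresh dataset. A minimal stand-alone sketch of that idea (the JSON layout and step names are assumptions for illustration, not Dcipher's template format):

```python
import json

# A registry of reusable steps; the template only stores their names.
STEP_REGISTRY = {
    "clean": lambda text: text.lower().strip(),
    "token_count": lambda text: len(text.split()),
}

def save_template(steps, path):
    """Persist the ordered step list -- the 'save as project template' moment."""
    with open(path, "w") as f:
        json.dump({"steps": steps}, f)

def run_template(path, dataset):
    """Reload the template and apply its steps to a new dataset."""
    with open(path) as f:
        steps = json.load(f)["steps"]
    results = []
    for doc in dataset:
        for name in steps:
            doc = STEP_REGISTRY[name](doc)
        results.append(doc)
    return results

save_template(["clean", "token_count"], "template.json")
print(run_template("template.json", ["  Great Support!  ", "Slow delivery this week"]))
```

The key design point mirrors the app: the template stores *what to do*, not the data, so swapping in a new dataset requires no changes to the saved pipeline itself.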
You have now successfully reused a saved template! The next time you need to rerun the same pipeline, you'll save a lot of valuable time. You can also apply your customized pipelines to different datasets efficiently.