Connector FAQs

1. Have you implemented any solutions from your own company or other vendors? Please explain the preferred approach to handle infrequent, batch, near real-time, and real-time (synchronous) interactions. What is the typical volume of data and transactions that you handle?

Answer: We have integrated with many different systems through a combination of our enterprise API and our connector platform, which provides publish-subscribe (pub-sub) connectivity to the third-party provider APIs that TransPerfect maintains. For the enterprise API, we prefer near real-time interaction; the API connects directly to our internal enterprise service bus and application-level queues. Through our connector platform, we can support all of the interaction patterns listed above, and we have found that infrequent and batch approaches suffice for most integrations on our serverless connector architecture. Typical volume varies by customer; some customers regularly send us thousands to tens of thousands of data points and documents daily.
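
As a rough illustration of the near real-time pattern, the sketch below posts a single record to a hypothetical enterprise API endpoint; the URL and field names are placeholders, not the published contract:

```python
import requests

# Hypothetical endpoint and payload shape, for illustration only; the
# actual contract is defined in the published enterprise API documentation.
API_BASE = "https://api.example.transperfect.com/v1"

record = {
    "recordType": "document",
    "sourceId": "DOC-12345",  # caller's identifier for the record
    "payload": {"title": "Consent Form", "language": "en-US"},
}

# Near real-time interaction: the API accepts the record synchronously and
# places it on an internal application-level queue for downstream processing.
resp = requests.post(f"{API_BASE}/records", json=record, timeout=30)
resp.raise_for_status()
print(resp.json())  # e.g., an acknowledgement with a tracking ID
```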

2. Does TI currently have any established connections? Is the interface adaptable to connecting with any system using that protocol? Does TI utilize SOAP or REST API, or is there another method in use? Are there any restrictions with the API? Can customers access the complete set of data, metadata, and content through published and documented APIs?

Answer: Yes. Using our connector platform, we can connect to any API interface, SFTP drop box, batch file upload, or similar ESB interconnect. All of our APIs are RESTful and use JSON for all data points. Through a secure drop box, we can also ingest CSV files.
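
For the SFTP drop-box path, a minimal sketch of a CSV hand-off might look like the following (host, credentials, and paths are placeholders provisioned per customer during onboarding):

```python
import paramiko

# Placeholder connection details; real values are agreed at onboarding.
HOST, PORT = "sftp.example.transperfect.com", 22

transport = paramiko.Transport((HOST, PORT))
transport.connect(username="customer_user", password="********")
sftp = paramiko.SFTPClient.from_transport(transport)

# Drop a CSV batch file into the agreed intake directory; the connector
# platform picks it up on its next polling cycle.
sftp.put("records_2024-06-01.csv", "/inbound/records_2024-06-01.csv")

sftp.close()
transport.close()
```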

3. How are integrations shielded from changes to the structure of the data repository?

Answer: All of our integrations connect through our Enterprise Service Bus (ESB), which requires a standard JSON packet; each record type can carry additional data points as needed without changing the packet structure. In addition, our TI Connector framework leverages GraphQL to support variable data models in the endpoint API.
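
The sketch below shows the general shape of such a packet; the field names are illustrative examples, not the normative schema. The fixed envelope stays stable while the data points can grow per record type without breaking consumers:

```python
import json

# Illustrative ESB packet: a stable envelope plus an extensible body.
packet = {
    "recordType": "translationRequest",
    "recordId": "TR-98765",
    "timestamp": "2024-06-01T12:00:00Z",
    "dataPoints": {
        "sourceLanguage": "en-US",
        "targetLanguages": ["de-DE", "fr-FR"],
        # New optional data points can be added here for a record type
        # without changing the envelope that integrations depend on.
        "therapeuticArea": "oncology",
    },
}

print(json.dumps(packet, indent=2))
```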

4. Can TI integrate with a data lake, such as Denodo or Teradata?

Answer: Yes. Through our connector framework, we have integrated with several data lakes, including Teradata.
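
As a rough sketch of what a connector-side pull from Teradata could look like (assuming the teradatasql Python driver; the host, credentials, and staging view are placeholders):

```python
import teradatasql  # Teradata's Python DB-API driver

# Placeholder connection details; real integrations run inside the
# connector framework with credentials supplied at deployment time.
with teradatasql.connect(
    host="teradata.example.com", user="svc_connector", password="********"
) as con:
    with con.cursor() as cur:
        # Pull a batch of records from a hypothetical staging view.
        cur.execute("SELECT record_id, payload FROM staging.translation_queue")
        for record_id, payload in cur.fetchall():
            print(record_id, payload)
```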

5. Explain how TI merges data elements based on metadata.

Answer: Our connector framework mirrors the endpoint API's data model and translates data elements into the TI ESB data model using GraphQL, which supports one-to-one, one-to-many, and many-to-many relations.
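
A minimal sketch of that translation follows; the endpoint URL, schema, and field names are hypothetical. The query mirrors an endpoint model in which a project belongs to one client (one-to-one), owns many documents (one-to-many), and each document is tagged with many languages (many-to-many):

```python
import requests

QUERY = """
query ($projectId: ID!) {
  project(id: $projectId) {
    client { name }
    documents {
      id
      languages { code }
    }
  }
}
"""

resp = requests.post(
    "https://connector.example.com/graphql",
    json={"query": QUERY, "variables": {"projectId": "P-1"}},
    timeout=30,
)
project = resp.json()["data"]["project"]

# Translate each document into an ESB packet shape (field names
# illustrative, as in the packet example above).
packets = [
    {
        "recordType": "document",
        "recordId": doc["id"],
        "dataPoints": {
            "client": project["client"]["name"],
            "languages": [lang["code"] for lang in doc["languages"]],
        },
    }
    for doc in project["documents"]
]
print(packets)
```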

6. How do you abstract the solution interfaces from the data model?

Answer: We use GraphQL as the abstraction layer: solution interfaces are defined against a stable GraphQL schema, and resolvers map that schema onto the underlying data model, so changes to the data model do not alter the interface contract.
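
The sketch below (using the graphene library, with an invented record shape, not TI's actual schema) shows the principle: clients query stable field names, while resolvers map them onto whatever the storage model looks like:

```python
import graphene

LEGACY_ROW = {"doc_id": "D-1", "lang_cd": "en-US"}  # storage-shaped record

class Document(graphene.ObjectType):
    id = graphene.String()
    language = graphene.String()

class Query(graphene.ObjectType):
    document = graphene.Field(Document)

    def resolve_document(parent, info):
        # The storage columns (doc_id, lang_cd) never leak to the caller;
        # renaming them only changes this resolver, not the interface.
        return Document(id=LEGACY_ROW["doc_id"], language=LEGACY_ROW["lang_cd"])

schema = graphene.Schema(query=Query)
result = schema.execute("{ document { id language } }")
print(result.data)  # {'document': {'id': 'D-1', 'language': 'en-US'}}
```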

7. What services/utilities are available for bulk load and bulk export to support archival, legal discovery, mergers, acquisitions/divestitures?

Answer: We support several approaches. The TI reporting framework is built from denormalized domains drawn from our various platform applications, populated by an Extract, Transform, Load (ETL) process that runs hourly. These domains can be exported to AWS Athena, which can then be accessed as a data source through several means. We also support a simpler export of these domains to flat files on SFTP sites, as well as direct delivery to AWS S3.
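
As a rough sketch of the AWS-side consumption (using boto3; the database, table, bucket, and file names are placeholders for the per-customer export configuration):

```python
import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Query a denormalized reporting domain that the hourly ETL maintains.
athena.start_query_execution(
    QueryString=(
        "SELECT * FROM reporting.projects "
        "WHERE updated_at > current_date - interval '1' day"
    ),
    QueryExecutionContext={"Database": "reporting"},
    ResultConfiguration={"OutputLocation": "s3://customer-export-bucket/athena-results/"},
)

# Alternatively, deliver a flat-file extract straight to the customer's S3 bucket.
s3 = boto3.client("s3")
s3.upload_file("projects_extract.csv", "customer-export-bucket", "exports/projects_extract.csv")
```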
