How to Implement Data Capture Processing

 Note: Before you begin, see How to Add the Library to Your Xcode Project.

To implement the document data capture scenario, follow these steps:

  1. Implement a delegate conforming to the RTRDataCaptureServiceDelegate protocol. The delegate will handle messages from the data capture service; recommendations on what its methods should do are given in step 6 below.
  2. Create an RTREngine object using the sharedEngineWithLicenseData: method. The method requires an NSData object containing your license data. For example, you can use dataWithContentsOfFile: to create the data object from the license file, then pass it to the sharedEngineWithLicenseData: method (see the engine setup sketch after this list).
  3. Use the createDataCaptureServiceWithDelegate:profile: method of the RTREngine object to create a background recognition service. Set the type of document you are going to capture using the profile parameter — for example, "IBAN". The service is created and will continue to work with this profile (see the service setup sketch after this list).
    Only one instance of the service per application is necessary: multiple threads will be started internally.
  4. We recommend calling the setAreaOfInterest: method to specify the rectangular area on the frame where the document is likely to be found. For example, your application may show a highlighted rectangle in the UI into which the end user will try to fit the page they are capturing. The best result is achieved when the area of interest does not touch the boundaries of the frame but has a margin of at least half the size of a typical printed character.
  5. Implement a delegate that adopts the AVCaptureVideoDataOutputSampleBufferDelegate protocol. Instantiate an AVCaptureSession object, add video input and output, and set the video output delegate. When the delegate receives a video frame via the captureOutput:didOutputSampleBuffer:fromConnection: method, pass this frame on to the data capture service by calling the addSampleBuffer: method (see the capture session sketch after this list).
    We recommend using the AVCaptureSessionPreset1280x720 preset for your AVCaptureSession.
    Also note that your video output must be configured to use the kCVPixelFormatType_32BGRA video pixel format.
  6. Process the messages sent by the service to the RTRDataCaptureServiceDelegate delegate object (see the delegate sketch after this list). The result is delivered via the onBufferProcessedWithDataScheme:dataFields:resultStatus: method, which receives:
    • an RTRDataScheme object; use its id property to determine what recognition scheme has been applied to the document (some profiles provide two or more recognition result schemes), and its name property to display a human-readable description to the user, if needed.
       Important! If nil is passed instead of a valid RTRDataScheme object, the data scheme has not yet been matched, which may mean that the document in the frame does not match the selected profile. In this case, the results are not usable.
    • an array of RTRDataField objects, each representing one of the fields found and recognized. An RTRDataField object provides the identifier and the human-readable name for the field, the field text, and its location.
    • the result stability status, which indicates if the result is available and if it is likely to be improved by adding further frames. Use it to determine whether the application should stop processing and display the result to the user. We do not recommend using the result until the stability level has reached at least RTRResultStabilityAvailable and the data scheme has been matched.
  7. Save the results for the recognized page. Call the stopTasks method to stop processing and clean up image buffers. The data capture service keeps its configuration settings (such as the area of interest) and the necessary resources; processing will start again automatically on the next call to the addSampleBuffer: method.
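
The engine setup sketch below illustrates step 2 in Objective-C. The umbrella header name (AbbyyRtrSDK.h), the license file name, the setupEngine helper, and the _engine instance variable are assumptions for illustration only; the sharedEngineWithLicenseData: call itself is the one described above.

    #import <AbbyyRtrSDK/AbbyyRtrSDK.h>   // assumed umbrella header name

    // Typically done once, e.g. when the capture view controller loads.
    - (void)setupEngine   // hypothetical helper method
    {
        // Load the license file bundled with the application.
        NSString *licensePath = [[NSBundle mainBundle] pathForResource:@"MobileCapture" ofType:@"License"];   // hypothetical file name
        NSData *licenseData = [NSData dataWithContentsOfFile:licensePath];

        // Create the engine; _engine stays nil if the license data is missing or invalid.
        _engine = [RTREngine sharedEngineWithLicenseData:licenseData];
    }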
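
The service setup sketch covers steps 3 and 4: creating the data capture service for the "IBAN" profile and restricting recognition to the highlighted area. The id<RTRDataCaptureService> type, the CGRect parameter of setAreaOfInterest:, and the concrete coordinates are assumptions; check the API Reference for the exact declarations.

    // Hypothetical helper; self must conform to RTRDataCaptureServiceDelegate.
    - (void)setupDataCaptureService
    {
        // One service instance per application; it starts its own worker threads internally.
        _dataCaptureService = [_engine createDataCaptureServiceWithDelegate:self profile:@"IBAN"];

        // Restrict recognition to the rectangle highlighted in the UI; keep a margin of at least
        // half of a typical character size between this area and the frame boundaries.
        CGRect areaOfInterest = CGRectMake(40.0, 300.0, 640.0, 120.0);   // hypothetical coordinates in frame pixels
        [_dataCaptureService setAreaOfInterest:areaOfInterest];
    }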
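
The capture session sketch shows step 5: configuring an AVCaptureSession with the recommended preset and pixel format, and forwarding each frame to the service from the video output delegate. The dispatch queue label and the _session instance variable are illustrative, and error handling is reduced to a minimum.

    #import <AVFoundation/AVFoundation.h>

    // The class owning _session and _dataCaptureService adopts AVCaptureVideoDataOutputSampleBufferDelegate.
    - (void)configureCaptureSession
    {
        _session = [[AVCaptureSession alloc] init];
        _session.sessionPreset = AVCaptureSessionPreset1280x720;   // recommended preset

        // Camera input.
        AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
        NSError *error = nil;
        AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:camera error:&error];
        if (input != nil && [_session canAddInput:input]) {
            [_session addInput:input];
        }

        // The video output must deliver frames in the kCVPixelFormatType_32BGRA format.
        AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
        output.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
        [output setSampleBufferDelegate:self queue:dispatch_queue_create("com.example.video-output", DISPATCH_QUEUE_SERIAL)];
        if ([_session canAddOutput:output]) {
            [_session addOutput:output];
        }

        [_session startRunning];
    }

    // Pass every captured frame on to the data capture service.
    - (void)captureOutput:(AVCaptureOutput *)output
        didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
               fromConnection:(AVCaptureConnection *)connection
    {
        [_dataCaptureService addSampleBuffer:sampleBuffer];
    }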
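
Finally, the delegate sketch outlines the RTRDataCaptureServiceDelegate handling described in steps 6 and 7. The RTRResultStabilityStatus parameter type, the numeric ordering of the stability constants, and the name and text properties of RTRDataField follow the descriptions above but should be verified against the API Reference.

    // Called by the data capture service after processing a frame.
    - (void)onBufferProcessedWithDataScheme:(RTRDataScheme *)dataScheme
                                 dataFields:(NSArray<RTRDataField *> *)dataFields
                               resultStatus:(RTRResultStabilityStatus)resultStatus
    {
        // No data scheme matched yet: the result is not usable, keep processing frames.
        if (dataScheme == nil) {
            return;
        }

        // Wait until the result is stable enough to be shown to the user.
        if (resultStatus < RTRResultStabilityAvailable) {
            return;
        }

        // Save or display the recognized fields.
        for (RTRDataField *field in dataFields) {
            NSLog(@"%@ [%@]: %@", field.name, dataScheme.name, field.text);
        }

        // Stop processing and release image buffers. The service keeps its settings
        // and resumes automatically on the next addSampleBuffer: call.
        [_dataCaptureService stopTasks];
    }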

See the description of classes and methods in the API Reference section.
