Everything posted by jon.pearse

  1. Hi George, Are you using the latest version of Intella (2.0.1.1)?
  2. Hi Jason, We are getting close to the 2.1 release. This has the ability to remove complete sources. A snapshot should be available in approx a week. We will let you know when this is available.
  3. Hi George, What type of system/spec do you have? Also, have you moved the evidence or case data from when you initially created the case? These speeds do not sound right. It could be an I/O issue. Are the case and evidence drives local to the system (e.g. mounted within the system)? USB-connected drives can be a factor, and we recommend not using them. Jon
  4. Answer can be found in: https://support.vound-software.com/#knowledge_base/1/locale/en-us/answer/64/
  5. Hi gcahlik, In the current version there is no automatic function to export tagged items into another case. The workaround is what you have already mentioned: export the items in their original format, then index these items in a new case. Maybe others on the forum have done it a different way and can share their workflow with us? This question of privileged material has been asked before. It is likely that we will include such functionality in a future version of Intella. Jon
  6. Introduction

There are many vendors offering cloud services at reasonable rates. The benefits are that the customer can pick and choose what they want to use, based on their business requirements. The customer does not have to maintain expensive IT infrastructure, which reduces the number of IT staff and the cost for the firm. Given the benefits, many firms are moving all of their IT services to the cloud. We are now receiving more and more queries about running Intella or Intella Connect in a cloud environment.

A cloud server is basically just another computer. Intella and Intella Connect should run fine on these systems provided that the system's hardware/software meets, or is better than, the recommended specification. The Achilles heel of running Intella in the cloud is that we use dongle-based licensing: a physical license dongle needs to be plugged into the system running the Intella software. The issue with a cloud server is that it is normally a vast distance away (e.g. overseas) from the end user. There is no way to walk a dongle 'down the road' to the cloud vendor. Even if the vendor is located just 'down the road', they are likely to have policies in place which restrict physical devices (such as a license dongle) from being plugged into the host system. This post discusses using a cloud system to run Intella or Intella Connect, while obtaining a license from a physical dongle which is not plugged into the cloud system.

Physical Layout

Because Intella and Intella Connect use physical dongles for licensing, a dongle (with a current license) still needs to be available for these applications to operate. That said, the dongle does not necessarily need to be plugged into the cloud system. We provide network dongles that are designed for applications to obtain a license over a network. Furthermore, the network dongle can be configured to work across different subnets. If the cloud system has access to your local system, then the network dongle can be plugged into your local system. The cloud system is then configured to obtain a license from the dongle in your local system.

Prerequisites

Before you begin with the configuration, there are some things to check:
· Make sure that port 1947 is not blocked by any firewall (or other security software), as the Sentinel LDK License Manager uses it for communication.
· Make sure that the server and client machines are able to ping each other.
· You may need technical help from your IT and network administrators during this setup. Make sure that they are available to assist you should they be required.

Local System Configuration

The local system that holds the network dongle needs to have the Sentinel LDK License Manager installed and running on it. This is as simple as installing Intella or Intella Connect on the local system; the necessary drivers and applications for the License Manager will be installed during the Intella installation.

1. Install Intella or Intella Connect on the local system.
2. Once the installation is complete, test that everything is set up and working properly by running Intella or Intella Connect. If you encounter any HASP or licensing issues, you can troubleshoot them with the information in this article.
3. Once you have confirmed that Intella starts with no HASP or dongle issues, you need to make some changes in the Sentinel Admin Control Center (SACC). The first setting is located in the 'Access from Remote Clients' tab, which can be accessed at this link: http://localhost:1947/_int_/config_from.html. Make sure that the setting for 'Allow Access from Remote Clients' is checked. Click Submit if you have made changes.
4. The second setting is in the Basic Config page (http://localhost:1947/_int_/config.html). Make sure that the setting for 'Allow Remote Access to ACC' is checked. Click Submit if you have made changes.

Cloud System Configuration

When the settings on the local system are complete, you can go ahead and configure the cloud system. The steps are similar to those for the local system.

1. Install Intella or Intella Connect on the cloud system. You now need to configure the SACC so that the cloud system can access the local system to pick up a license.
2. The first setting is the same as shown at step 4 for the local system (above). In the Basic Config page (http://localhost:1947/_int_/config.html), make sure that the setting for 'Allow Remote Access to ACC' is checked.
3. The next settings are located in the 'Access to Remote License Manager' tab (http://localhost:1947/_int_/config_to.html). (i) Check the setting for 'Allow Access to Remote Licenses'. (ii) Check the setting for 'Aggressive Search for Remote Licenses'. (iii) Enter the IP address of the local system in the 'Remote License Search Parameters' field. Once done, click Submit.
4. To test the communication between the cloud system and the local system, open a browser on the cloud system and type the following in the address bar: http://localhost:1947/_int_/devices.html. Verify that you are able to view the network dongle which is plugged into the local system. If you can see the network dongle, this verifies that the cloud system is able to communicate with the local system at a HASP level.
5. When Intella or Intella Connect is opened on the cloud system, a license will be obtained from the local system, allowing the application to run.
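If you want a quick way to verify the first prerequisite before working through the SACC pages, a simple port test run from the cloud system can confirm that TCP port 1947 on the local system is reachable. This is an illustrative sketch only: the IP address is a placeholder and the helper function is our own, not part of Intella or Sentinel.

```python
# Minimal sketch: check that the Sentinel LDK License Manager port (1947)
# on the local (dongle) system is reachable from the cloud system.
# The host address below is a placeholder - use your local system's IP.
import socket

def can_reach_license_manager(host: str, port: int = 1947, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    dongle_host = "192.168.1.50"  # placeholder IP of the system holding the network dongle
    if can_reach_license_manager(dongle_host):
        print(f"Port 1947 on {dongle_host} is reachable - proceed with the SACC settings.")
    else:
        print(f"Cannot reach {dongle_host}:1947 - check firewalls and routing.")
```

If this test fails, resolve the network/firewall issue first; no amount of SACC configuration will help while port 1947 is blocked.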
  7. Hi Graeme, I'm not sure why the hits are not highlighted. I will check with our technical team. In the interim you could run several searches to narrow down and highlight the hits for the search criteria: 1) Run a search for research~ and run a separate search for centre~. 2) Run a search for "research~ centre~". 3) Review the items in the intersecting cluster. The hits will be highlighted in the documents from step 1. Step 2 narrows the number of documents down, so you should only get relevant hits for the phrase. I hope this helps. Regards, Jon
  8. Hi llanowar, Can you post the first 2 lines of the DAT file, and also the first line of the OPT file, please? The file/folder paths are what we are interested in. If there is any confidential information in the second line of the DAT file, this can be changed or redacted before you post it.
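For anyone following along, a few lines of Python can pull out just those lines and make the non-printing delimiters visible before posting. This is only a convenience sketch: the file names are placeholders, and the 0x14/0xFE characters are the delimiters commonly used in Concordance-style DAT files, which may differ in your export.

```python
# Print the first two lines of a DAT file and the first line of an OPT file,
# replacing the non-printing field delimiter (0x14) with ' | ' so the
# field boundaries are visible. File names below are placeholders.
from itertools import islice

def head(path: str, n: int) -> list[str]:
    """Return the first n lines of a text file."""
    with open(path, "r", encoding="utf-8", errors="replace") as f:
        return [line.rstrip("\r\n") for line in islice(f, n)]

for line in head("loadfile.dat", 2):
    print(line.replace("\x14", " | "))

print(head("loadfile.opt", 1)[0])
```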
  9. Hi, I see that this was resolved in your support ticket. I'm not sure if you have seen this or not: we have some information, and a video, on load files and overlays at the link below. This may be useful if you are new to creating load files in Intella. http://community.vound-software.com/index.php?/topic/402-creating-a-load-file-in-intella/
  10. Introduction:

The Tasks feature allows the user to run predefined processes either directly after the indexing process has completed, or at a later stage. In this article we will look at using the Tasks feature to automate the searching, filtering and export processes associated with an eDiscovery matter.

Note: This example is for demonstration purposes only. Your searching, filtering and export requirements will likely be different from what is used here. Also, we start the tasks from the point of running searches. This presumes that all other required checks and pre-processing tasks, such as investigating items that were not processed (e.g. OCR and encrypted items), have been completed.

Basic ED processing/filtering steps:

For this example we will cover the following basic eDiscovery processing steps.
· Search/filter the data in the case using a keyword list and a date range.
· Show the top level parents for the items that were responsive to the searches.
· Deduplicate the top level parents.
· Return all family items. This shows email attachments and all embedded items.
· Remove the irrelevant items such as folders, containers, embedded items and non-document types. This leaves the remaining items, which are ready for review.
· Export the documents ready for review to a load file.

Creating the Tasks:

In Intella Desktop the Tasks wizard is located under the File menu. Start the Tasks wizard and click on the New button in the Tasks window. Type a name for the task. I have used a prefix of '01 -' in the task name. Using a sequential prefix makes it easier to see the order of the tasks.

Step 1 allows you to search for items. In this case we will select the 'Keyword list' option from the dropdown. Once done we can either select a keyword list which is already in the case, or we can load a keyword list from a file. Next, click on the 'Add' button to the right and select 'Date' from the dropdown. Enter the date parameters. Step 2 allows the user to refine the search. We don't use Step 2 for this task. Step 3 allows the user to apply actions to the results. In this case we are going to tag the responsive items. Hierarchical tags are available, so we will set all of the output tags under a top level tag named 01 - eDiscovery Processing, e.g. 01 - eDiscovery Processing/01.01 - KWSearch-DateRange

The second task is configured to return all of the parent items from the items which were responsive to the search and date filter. Like we did for the first task, click on New and type a name for the task. In Step 1 we want to select the tag that holds the items that were responsive to the search criteria. This is basically the starting dataset to be filtered further. Note that we tagged the results in the first task into a hierarchical tag. This tagging structure must be maintained for the starting dataset in Step 1 of this second task. E.g. 01 - eDiscovery Processing/01.01 - KWSearch-DateRange. Click the Add button for Step 2 and use the dropdown to select the 'Identify parents' option. Select the 'Top level parents' radio button and check the checkbox for 'Add items that are already top level parents'. In Step 3, tag the items into a new tag under the tag group '01 - eDiscovery Processing', e.g. '01 - eDiscovery Processing/01.02 - Show Top Level Parents'.

Once the top level parents have been identified, we need to create a task to deduplicate the top level parents. Create a new task and name it appropriately. In Step 1 select the tag that holds the top level parent items. Click the Add button for Step 2 and use the dropdown to select the 'Deduplicate results' option. In Step 3, tag the items into a new tag under the tag group '01 - eDiscovery Processing'.

Now that we have our top level items deduplicated, we need to bring back the family for those items. Create a new task and name it appropriately. In Step 1 select the tag that holds the deduplicated top level parent items. For Step 2, click the Add button and use the dropdown to select the 'Identify children' option. Select the 'All descendants' radio button, and check the 'Ignore folders' checkbox. In Step 3, tag the items into a new tag under the tag group '01 - eDiscovery Processing'.

The next step is to add the top level parents and family items into one tag, then clean up the dataset to remove any embedded items and containers such as zip files and PST files. We have already extracted the content of these files, so we no longer need the container or zip files. Click on New and type a name for the task. In Step 1 select the Match dropdown, and select 'Any' from the list. We need to set this to 'Any' as we will be adding all of the data from the two tags; we don't want just the intersection of the tags. Still on Step 1, enter the tag name that holds the deduplicated top level parent items. Also, click the Add button to the right and enter the tag name that holds the family items. Click the Add button for Step 2 and use the dropdown to select 'Suppress irrelevant items'. In Step 3, tag the items into a new tag under the tag group '01 - eDiscovery Processing'.

We now have all of our responsive data deduplicated and the family items returned. Although we have cleaned out the irrelevant items, there may be other file types in the dataset that we need to clean out. These could be items located in the System and Others categories under the Type facet. We need to create a saved search for this task, as we will be using Exclude searches. To create the saved search for this process:
· Run a search over the tag that contains the items with the irrelevant items removed.
· Run Exclude searches over the Containers, System and Others categories under the Type facet.
Once complete, click on the Save button in the Results box on the right, and enter a name for your saved search. E.g. '02 - Excluded: System, Others, Containers'

Once done, open the Tasks wizard again. Click on New and type a name for the task. In Step 1 select the Saved Search option, and use the second dropdown to select the newly created Saved Search. Note that a Saved Search can also be loaded from a file. We don't use Step 2 for this process. In Step 3, tag the items into a new tag under the tag group '01 - eDiscovery Processing'. This concludes the searching and filtering tasks to provide a dataset ready for review/export.

Exporting the results:

Now that we have our dataset ready for export, we can go ahead and create a task to export the data to a load file. Click on New and type a name for the task. In Step 1 select the tag option, and enter the tag name that holds the items which are ready for review. We don't use Step 2 for this process. In Step 3, select Export from the dropdown. Select your pre-configured load file template from the list. Once the tasks have been created and run, they should look like this.
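For readers who want to sanity-check what the chain of tasks is doing conceptually, here is a rough Python model of the filtering logic (responsive items → top level parents → dedupe → family → suppress irrelevant types). Intella performs all of this internally; the Item fields, category names and helper functions below are illustrative assumptions, not Intella's actual data model.

```python
# Hypothetical model of the eDiscovery filtering chain described above.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Item:
    item_id: int
    parent_id: Optional[int]   # None for top-level items
    md5: str                   # hash used for deduplication
    category: str              # e.g. "Document", "Container", "System"

def top_level_parent(item: Item, by_id: dict[int, Item]) -> Item:
    """Walk up the parent chain until a top-level item is reached."""
    while item.parent_id is not None:
        item = by_id[item.parent_id]
    return item

def dedupe(items: list[Item]) -> list[Item]:
    """Keep one item per MD5 hash."""
    seen: set[str] = set()
    unique = []
    for item in items:
        if item.md5 not in seen:
            seen.add(item.md5)
            unique.append(item)
    return unique

def family(parents: list[Item], all_items: list[Item]) -> list[Item]:
    """Return the parents plus all descendants (attachments, embedded items)."""
    result = list(parents)
    stack = [p.item_id for p in parents]
    while stack:
        pid = stack.pop()
        for item in all_items:
            if item.parent_id == pid:
                result.append(item)
                stack.append(item.item_id)
    return result

def review_set(responsive: list[Item], all_items: list[Item]) -> list[Item]:
    """Chain the steps, mirroring tasks 01.01 - 01.05 above."""
    by_id = {i.item_id: i for i in all_items}
    parents = dedupe([top_level_parent(i, by_id) for i in responsive])
    return [i for i in family(parents, all_items)
            if i.category not in {"Container", "System", "Other"}]
```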
  11. Hi, We have this improvement to the GMail collection feature scheduled for a future version. This may be released in version 2.2.
  12. Hi, It looks like the Skype database was not processed with the restrictions in place. Along with the Skype information from the file types that you already selected, you also need to include the Skype main.db database file. This can be done by clicking the 'Add file name' button and typing main.db as the value.
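As an aside, if you ever need to spot-check a main.db outside Intella, it is a standard SQLite database. A minimal sketch is below; the Messages table and column names come from older Skype schemas and may differ between Skype versions, so treat them as assumptions.

```python
# Spot-check the first few messages in a Skype main.db (a SQLite file).
# Table/column names are from older Skype schemas - verify against your copy.
import sqlite3

conn = sqlite3.connect("main.db")  # path placeholder
try:
    for author, ts, body in conn.execute(
        "SELECT author, timestamp, body_xml FROM Messages LIMIT 10"
    ):
        print(author, ts, body)
finally:
    conn.close()
```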
  13. Hi, Did ScanPST detect any issues? Can you submit a support ticket as we may need to look at the case logs. http://support.vound-software.com/Default
  14. Hi, There may be an issue with the OST file. You can try the ScanPST.exe utility provided by Microsoft to check and fix issues in PST/OST files (always make a backup of the file first). This tool is usually provided with the Outlook product.
  15. Hi, There could be a number of reasons why a 500 error is shown. We may need to look at the log files for the case. Can you submit a support ticket detailing the issue, and what specific searches you were doing when it occurred, please?
  16. Hi Rashid, That is the best approach. We keep up to date with the latest HASP drivers for our software. It is possible that the old EnCase dongle does not support the latest HASP drivers. If that is the case, then a new EnCase dongle should fix this issue.
  17. Hi, Currently individual page numbering (such as bates page numbering) is only supported when exporting to a load file. We are looking to add this feature to PDF exports in a future version.
  18. Thanks for your input Adam. We are looking at both complete source removal and arbitrary item removal from cases. Initially we will work on source removal; item removal will likely come further down the road, as it is more complex.
  19. Hi, Thank you for your suggestion. We see the value that this feature could provide. We have added this to our list of features for future releases.
  20. Hi Adam, Thanks for your idea re branding. This feature could be added to a future version.
  21. I have discussed this with the dev team and they are looking at options where batches can be reopened for QA purposes. One option is to add another state called "Reopened", which would allow a user (the person doing the check) to change coding on batches after they are completed. Then privileged users could mark them as "Completed" again once done. We are also looking at adding the ability to capture the exact trail of coding decisions. This would be useful for audit purposes.
  22. Answer can be found under: https://support.vound-software.com/#knowledge_base/1/locale/en-us/answer/70
  23. Creating an overlay in Intella

We have received a number of support tickets regarding creating load file overlays with Intella. An overlay is used to add additional data to a load file which has already been ingested into a review platform. For example, a load file may have been created in Intella and provided to the end user. The end user ingests the load file into their review system (such as Relativity or Concordance) and discovers that some metadata fields (which they required) are not in the load file. The solution is to create an overlay so that the missing metadata fields can be overlaid onto the records that were loaded into the review platform from the original load file.

Intella does not have an 'export overlay' option as such. That said, you can export a trimmed-down version of a load file that can be used for the purpose of overlaying metadata onto existing records. The so-called 'overlay' will contain the missing metadata fields for each record (based on the original load file that was provided to the end user) and will be overlaid onto the existing records by using a common reference or identifier such as the DocID.

The key point mentioned above is that the records are updated by using a common reference such as the DocID. If an 'Export set' was created when the original load file was exported from Intella, the Export set can be used as the common reference and creating the overlay is quite simple. This post shows the steps involved. However, if an 'Export set' was not created during the load file export, then this can be many times harder to do. In that case, really the only other unique identifier you could use (if it was included in the original load file) is Intella's ItemID. In these situations the review platform may allow flexibility as to which field can be used as the reference when loading an overlay. If so, the ItemID can be used without a great deal of effort. If the review platform only allows the DocID to be used as the reference, then the DocIDs will need to be added to the overlay manually. This could be done by matching the ItemIDs in the original load file with the ItemIDs in the overlay, and copying and pasting the load file DocIDs for the matching records in the overlay. This is a manual process and is subject to human error, so adequate quality control and checks are required. This post is not for that situation, but rather explains how to create an overlay when an Export set has been created.

Below are the steps to create an overlay. Note that this is an example only, and your overlay may require different fields and data. You should build your overlay to match the requirements.
1) Start by selecting the same items which are in the original load file. As when creating a load file, right-click on the items and select 'Export highlighted items'.
2) The 'File naming and numbering' settings could be anything. We will discard these settings, so they are not important.
3) Under 'Load file options' we can turn everything off (unless you are adding date/time fields). Turn off natives, text and images, as these would have been included in the original load file and are not needed here.
4) Under the 'Load file chooser' options, create a field for the Export set that was used for the original load file export. Also add the metadata fields that you want in the overlay. In my example I have added the fields for the Export set, To, From, Subject, Primary date/time and File name. Your requirements will probably be different, but the important thing here is to make sure that the Export set is included, as this will be required for matching the records.
5) The 'Redacted items' screen will be greyed out as we are not exporting natives, extracted text or images. Run the load file export as normal.
6) The load file (or overlay) should be very fast to create as (in this example) we are basically creating a text file and we don't need to render images, extract text or provide natives.

A DAT file will be created with its delimiters for field and column separators. The review platform should be able to accommodate these delimiters when the overlay file is ingested. Below is a sample of what the overlay file should look like (I have opened this in Excel and removed the delimiters). You can see that I have metadata columns for the To, From, Subject, Primary date/time and File name fields. You can also see that the Export set is included. This field may need to be renamed in the overlay file to match the field used for importing in the review platform.
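For the harder scenario mentioned above (no Export set, matching on ItemID to fill in DocIDs), a small script can reduce the copy-and-paste risk. The sketch below is illustrative only: the file names, the 'ItemID'/'DocID' field labels and the Concordance-style delimiters are assumptions and should be adjusted to match your actual load files.

```python
# Hypothetical sketch of the manual DocID matching described above:
# join the overlay records to the original load file on ItemID and
# copy the DocID across. Delimiters 0x14 (field) and 0xFE (quote) are
# typical of Concordance-style DAT files but may differ in your export.
import csv

def read_dat(path: str) -> list[dict]:
    """Read a Concordance-style DAT file into a list of row dicts."""
    with open(path, encoding="utf-8", newline="") as f:
        reader = csv.DictReader(f, delimiter="\x14", quotechar="\xfe")
        return list(reader)

original = read_dat("original_loadfile.dat")  # has both DocID and ItemID
overlay = read_dat("overlay.dat")             # has ItemID but no DocID

doc_id_by_item = {row["ItemID"]: row["DocID"] for row in original}

for row in overlay:
    # Empty string flags an unmatched record for quality-control review.
    row["DocID"] = doc_id_by_item.get(row["ItemID"], "")
```

Even with a script, the quality-control checks mentioned above still apply: verify record counts and spot-check a sample of matched DocIDs before loading the overlay.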
  25. Hi, No, this feature did not make it into the 2.0.1 release. Version 2.0.1 is basically a maintenance release with very few new features. This feature is on our road map and will be implemented in the next version, or the version after, but at this stage we can't commit to which version that will be.
  26. Hi Adam, We have a workaround for keeping the order of items that you have KW searched for. Because there is currently no setting to keep the results of searches in the same order as the items in the KW list, we need to populate a column which will retain this order. We can do this by searching and saving the hits into hierarchical tags. Placing the search hits (from a KW list) into hierarchical tags automatically can be done by using a CSV KW list and the Auto-tag feature in the KW list facet. A CSV KW list allows you to enter search terms in column A, and you can also designate a tag for the hits of those terms in column B. You can use hierarchical tags in the KW list by entering a slash (/) between the top level tag and the child tag (see sample below). In row 1 of the example above, the search term ABC.001.0000011 will be searched and the results will be placed into a tag named 900001, which is under a top level tag named TagOrder. I have used 900001 as tag names are sorted in alphabetical order, not numerical order. Using numbers such as 1, 2, 3, 4, etc. for tag names will result in the order looking like this: 1, 11, 12, 13, 14, ..., 2, 21, 22, 23, etc. You can see in the image below that the Export set IDs can retain their order by sorting on the TagOrder column in the table view. I hope this helps. Regards Jon
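If the KW list is long, generating the CSV programmatically avoids numbering mistakes. A minimal sketch is below, assuming the two-column layout described above (terms in column A, TagOrder/9xxxxx tags in column B); the file name and terms are placeholders.

```python
# Write a CSV keyword list: search term in column A, hierarchical
# auto-tag in column B. Tag names start at 900001 so that alphabetical
# sorting of the tags preserves the original term order.
import csv

terms = ["ABC.001.0000011", "ABC.001.0000023", "ABC.001.0000105"]  # placeholders

with open("kw_list.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    for i, term in enumerate(terms, start=1):
        tag = f"TagOrder/{900000 + i}"  # 900001, 900002, ... sorts correctly
        writer.writerow([term, tag])
```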