
AdamS

Members
  • Posts: 601
  • Joined
  • Last visited
  • Days Won: 23

Everything posted by AdamS

  1. Dale, it's not something I've played with in a very long time, but have you done any testing with Wine (or similar) applications that let you run Windows programs on Linux? I seem to remember having mixed results when I last tried, but that was some years ago and I imagine the software has improved in that time.
  2. Either would work; I think it might come down to which is easiest for you to implement.
  3. I may have had too much coffee at that point. Quickly and easily isolating the items my clients deem relevant, based on the coding tags, was the central issue here; everything else just evolved from the crazy thought process that followed. Currently it's difficult to quickly isolate items based on the review coding 'tags' that our clients apply. I would break a proposed solution down like this. Currently we set up the coding tags via the 'Search' tab by creating new tags in the 'Tags' facet, then we follow through with the coding layout etc. I would like to see a new facet in the 'Search' tab called 'Coding Tags' (or similar), and this is where we create the coding tags which will be used in the batch reviews. This will also simplify coding layout creation, as when you go to create the layout you will only see tags created in the 'Coding Tags' facet rather than the many other normal tags which may exist. It will also make it super easy to identify items my clients have coded as relevant, and to create the relevant batches for export or for further review. I should point out that this has become particularly relevant for me, as in the vast majority of my cases the number of tags runs quite high. I could go back and delete some of my setup tags at a later date, but my practice has been to keep these wherever possible, as it allows me to duplicate searches and findings later if I need to backtrack at any point.
  4. Graeme, that's correct. Items tagged this way aren't visible anywhere to a restricted viewer; even the tag itself is not visible to a restricted viewer, so they don't know anything has been hidden from them. I have used this on quite a few occasions with no issues reported.
  5. Hi Graeme, there is no way to remove selected emails from a case at this stage. I suspect that might be difficult to achieve, but I know they (Vound) are looking at adding the ability to remove source files in total. Is your ultimate goal to provide the data set back to your client in some format for them to ingest or merge back with their Exchange server? Or are you looking to set up a review on Connect where those 500 emails are not visible to anyone? Both can be achieved, but as you mention preserving tags, I assume you want to keep the data within Intella. If that's the case, you can use the ability to hide tagged items by using the 'Privileged' tag, then set up a restricted viewer, hiding those tagged items from their account. Of course, that hinges on using Connect to review the data. If you are talking about hiding something within Intella that won't be visible when using Intella Desktop, I don't think that can be done; at least, I'm not aware of how it could be done.
  6. I would say that this is the most urgent addition I can think of, for the following reasons. Currently, when our clients review the documents via the 'Review' tab and apply their coding, the resulting hits appear mixed in with the regular Tags facet. This is fine when you have a very simple case with no other tags; however, that is rarely the case, as most of my matters require me to perform some pre-tagging and scoping to set up the batches, and in some cases there are hundreds of tags along with family tag sets etc. Once my client has finished their review and wants to create export sets, it becomes very difficult when viewing the tag sets to quickly isolate the tags that are related to the batches. This is compounded when they have multiple reviews going on and want to create exports at different stages of the review. Currently, to create an export I have to first isolate the tags by user, then, ensuring batches and tags are showing in the preview window, sort by tags, manually scroll through and apply a new tag to all the relevant items, and create an export set from those tags. If we are talking about 4000 items then this can be a slow process. Everything up to that point is brilliant, simple and fast, so that last stage needs to be just as simple and fast, ideally so our clients can create the export package without needing further assistance. To that end I would propose the following:
     • A new facet called 'Review Code Tags' be created, which is where the tags used for coding appear, completely separate from the other tags applied via the 'Search' tab.
     • This 'Review Code Tags' facet to be mirrored as its own separate tab next to Search and Review in the top right corner. This will give us the ability to show or hide that tab based on user privileges we can set up.
     • The 'Review Code Tags' tab would be laid out identically to the 'Review' tab, meaning you can see each batch with the same details as the Review tab, with a few small differences, namely:
     • The coding fields appear on the right-hand side of the screen, and all tags have a 'show/hide' option which you need to select for each tag.
     • When you 'explore' any of the batches, you only see those items which correspond to the tags you selected in the previous step.
     • The coding fields on the right remain visible so you can change them on the fly, and what you view will change dynamically as they are changed.
     • Like the main 'Review' tab, you can select all the batches or a selection of batches; however, unlike the 'Review' tab, we need to be able to display all items in a single batch/window.
     • The coding fields on the right-hand side will also need the ability to select individual users, or all users, who applied the tags.
     • Lastly, once all the relevant tagged items are selected and displayed, the option to 'create export' needs to be available somewhere.
     I suspect that would be quite a lengthy process to implement, but I can't overstate how much I think it's needed. As a short-term (and hopefully simple) addition, I would offer that simply having a separate facet for review coded tags would be a huge step in the right direction.
  7. This has been highlighted by my client on a long-running matter where we have created some 40 or so batches. As they are currently arranged alphabetically, and due to the many different naming conventions we've used as the batches increased, it's become apparent that the ability to sort by date of creation would be a huge benefit and the easiest way to identify new batches. I know I could alleviate this with better naming conventions, and from this point on I will preface each batch name with a number; however, in some instances the client is determining the batch names and for their own reasons will not want this prefix. Also, having the ability to 'clean up' the preview tab would be handy: maybe with the archive option, rather than simply changing the status to archived, we could have a folder at the top of the list called 'Archive', and when a batch is archived it's hidden from view on the main screen but can still be accessed in the Archive folder; or even a separate tab for archived batches.
  8. This would be particularly relevant when using Connect: having the ability to pull thumbnails at set intervals from video files would be a fantastic addition. It would be a huge benefit for any Connect-reviewed data, as the need to download large video files over the internet to preview them is incredibly time consuming and often not feasible. I would also go a step or two further and add some manual or automatic configuration options based on the length of the file, something like below (see the sketch after this list):
     • Short video files of less than 5 mins: a thumbnail screenshot every 5 or 10 secs (configurable).
     • Long video files of 5-15 mins: every 10-20 secs (configurable).
     • Very long videos: 10, 20 or 30 sec intervals.
     The other, more complicated alternative is an automated smart screenshot generator which can make intelligent screenshots based on the content. I'm not exactly sure how these work, but I've heard of software that can detect significant changes in the scene, and also skin tone, and will take more or fewer screenshots based on this.
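A minimal sketch of the tiered-interval idea above, using ffmpeg/ffprobe (both assumed to be on the PATH); the tier boundaries, intervals, function names and file names are illustrative, not anything Intella actually exposes:

```python
import subprocess
from pathlib import Path

def video_duration(path):
    """Return the duration of a video file in seconds, via ffprobe."""
    out = subprocess.check_output(
        ["ffprobe", "-v", "error",
         "-show_entries", "format=duration",
         "-of", "default=noprint_wrappers=1:nokey=1",
         str(path)],
        text=True,
    )
    return float(out.strip())

def thumbnail_interval(duration):
    """Pick a screenshot interval from the length tiers proposed above."""
    if duration < 5 * 60:          # short: every 10 secs
        return 10
    if duration < 15 * 60:         # long: every 20 secs
        return 20
    return 30                      # very long: every 30 secs

def extract_thumbnails(video, out_dir):
    video, out_dir = Path(video), Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    step = thumbnail_interval(video_duration(video))
    for t in range(0, int(video_duration(video)), step):
        # Grab one frame at offset t and save it as a JPEG thumbnail.
        subprocess.run(
            ["ffmpeg", "-y", "-v", "error", "-ss", str(t), "-i", str(video),
             "-frames:v", "1", "-q:v", "2",
             str(out_dir / f"{video.stem}_{t:06d}.jpg")],
            check=True,
        )

extract_thumbnails("interview.mp4", "thumbs")   # illustrative file names
```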
  9. I was just thinking something over and thought I'd float the idea here to see if I can get some thoughts/feedback from people. I'm thinking about ways that we might process highly sensitive information in such a way that, although I would be processing and hosting the data, I couldn't actually view the files themselves. I'm thinking along the lines of either illegal material (i.e. assisting the Police) or highly secretive material where a client may wish to conduct the review internally, but without disclosing any data to me or other third parties. It got me thinking about how this could be accomplished. I don't know if it's even possible with the way Intella works internally, but here it is:
     Data Capture: This would obviously need to be facilitated by the client, in such a way that they handed me an encrypted container with all the information for processing. There are many ways the data could be encrypted, but rather than Intella having to develop a bespoke encryption package, I thought it might be easier to partner with existing software such as 7zip. Using 7zip it's very easy to create an encrypted archive, and it's relatively quick given the size of the data. Intella would need some mechanism that prompts for the archive password at the beginning of the indexing stage, which would then allow Intella to index the data inside the encrypted archive. There is the obvious problem here that I would need the password to process the data, thereby negating the whole 'secrecy' component; however, this could be overcome by having the client on hand at that stage to enter the password and visually confirm that I don't have access to the data.
     Indexing: There would need to be an option for 'secure processing' or the like that we select at the beginning of the case setup. This option would have the effect of suppressing some of Intella's statistical reporting, to ensure no information is disclosed to the operator (i.e. me). The Insight tab would be unavailable and no other data would be visible during or after the process has completed.
     Review Stage: At the conclusion of the indexing process, a secure database archive is created with a one-time random password generated by Intella. The database can only be accessed using the password, and strict logging protocols would need to show any and all access to the database. Although at this stage I could technically view the results, the audit trail would give comfort and proof that no one has viewed the data. The case can then be shared via Connect or viewed via Intella as needed.
     That's a very simplified process flow, but I think you get the idea. I would see something like this as very uncommon, but it could have interesting applications, particularly to assist Police. I'd be very interested to hear whether anyone else would find this type of process useful, and especially interested in hearing from the admins whether something like that would even be possible and how hard it might be to implement; possibly even as an add-on component that is sold separately, to reflect the time and effort that would have to go into building something like that.
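To make the 'data capture' and one-time password stages concrete, here is a minimal sketch assuming the py7zr library stands in for the 7zip integration; the function names and workflow are hypothetical, not an Intella API:

```python
import getpass
import secrets
import py7zr  # pip install py7zr

def capture_evidence(source_dir, archive_path):
    """Client side: pack the evidence into a password-protected 7z container."""
    password = getpass.getpass("Set container password: ")
    with py7zr.SevenZipFile(archive_path, "w", password=password) as z:
        z.writeall(source_dir, arcname="evidence")

def open_for_indexing(archive_path, staging_dir):
    """Processing side: the client types the password at indexing time."""
    password = getpass.getpass("Container password (entered by client): ")
    with py7zr.SevenZipFile(archive_path, "r", password=password) as z:
        z.extractall(path=staging_dir)

def one_time_case_password():
    """Review stage: a random one-time password for the secure case database."""
    return secrets.token_urlsafe(32)
```

One caveat worth noting: with a plain password-protected 7z archive the file names remain visible unless header encryption is also enabled, which matters for the secrecy goal here.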
  10. From time to time we get requests from clients to have their branding visible on a shared case. Would it be possible to have branding visible on a case-by-case basis? I'm thinking of an option within a case to 'override' the branding logo that is set under the options, so we could still have a 'default' logo that appears when no other is linked.
  11. Jon, that would be great. I know from experience that investigations are often fluid, and what is relevant at the beginning may change as things progress. Having the ability to 'reopen' previously completed batches will allow an easy way to QA and make changes as needed. Just one comment though: we would need this ability to be persistent; by that I mean being able to reopen a batch as many times as needed.
  12. Perhaps a new power/privilege could be created to allow a completed batch to be reviewed and changes made while still using the review tabs? I can see where that would be handy, without needing to either remove all existing tags and redo them, or open the document in the preview tab before making changes.
  13. I too have had some bad experiences with Nvidia drivers; in fact, I lost about two weeks' worth of work on a case when the driver caused a crash mid-indexing. As a general rule now, I don't update the Nvidia drivers until I've had a chance to run some tests.
  14. I ran a few tests on a very small data set (1.6GB) and got some slightly conflicting results. This is far from definitive, but interesting nonetheless.
     First test:
     • Ran indexing until it hit about 60% then stopped; the process continued to run and finalise with 1905 items indexed.
     • Indexed 'new data' only, and a further 1 item was indexed, bringing the total to 1906.
     • Deleted the data, set up a new case and indexed the data; final result 1906 items. So far so good.
     Second test (same data set):
     • Ran indexing and almost immediately stopped the process; finished with 45 items indexed.
     • Indexed 'new data', and after the process finished a total of 1901 items were indexed (5 missing/not indexed?).
     • Deleted the data, set up a new case and indexed; final result 1906 items.
     With eagle-sharp hindsight I realised later that I should have created some MD5 lists to try to identify which 5 files were not indexed (see the sketch below). However, I will repeat the test exactly the same way and see if this was a one-off, or if I can duplicate the discrepancy.
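For anyone repeating this test, a minimal sketch of the MD5 list idea: hash every file in the test set, write one sorted list per run, then diff the lists to spot which items a partial index missed (paths and file names are illustrative):

```python
import hashlib
from pathlib import Path

def md5_file(path, chunk_size=1 << 20):
    """MD5 a file in 1MB chunks so large files don't exhaust memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def md5_list(root, out_file):
    """Write 'hash  relative/path' lines for every file under root."""
    root = Path(root)
    lines = [f"{md5_file(p)}  {p.relative_to(root)}"
             for p in sorted(root.rglob("*")) if p.is_file()]
    Path(out_file).write_text("\n".join(lines) + "\n", encoding="utf-8")

md5_list("testdata", "run1_md5.txt")
# After the second run: diff run1_md5.txt run2_md5.txt
```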
  15. Thanks Lukasz, I suspected it wouldn't be a simple matter. I considered the 'index new data' feature but wasn't sure how closely it would look to determine whether data is 'new' or not, i.e. will it drill right down to all the files and check whether they have been indexed, or will it simply look at the parent folder name?
  16. I suspect this will be a difficult one, but I can't express how much I would like this ability. I have several times been caught in the field where I'm using my laptop to run large indexing jobs over 1TB or more, and because I have to use USB drives it takes days to index. In many instances I have to use the laptop for other processes as well, and quite often it means I have to cancel the indexing process and then re-index at a later date. Part of me thinks pausing shouldn't be that complicated, as the software should know exactly where it's up to in the source data and should be able to pick up where it left off; but if it were that simple, you would have already implemented it. In any case, a huge plea for the road map, and sooner rather than later if it is possible.
  17. I think if the original case had the 'cache original evidence items' option ticked then you can probably accomplish what you need; however, if that option wasn't ticked then you are likely out of luck. Are you able to check the case size against any paperwork or notes to see if the size looks like it might be right? For example, one of my cases has just over 40GB of original evidence data; when I check my case data there is just over 60GB. This is not definitive, as the indexing process creates data, but it may give you a clue (see the rough size check below). You can use the 'Location' facet to show all items in the data set, then just export to native format and ensure you select 'maintain folder structure'. You may need to run a little testing on a small data set to check how it comes out, but I think that will be the only way.
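A rough sketch of that size sanity check, totalling the bytes under two folders for comparison; the paths are illustrative:

```python
from pathlib import Path

def dir_size_gb(root):
    """Total size of all files under root, in GB."""
    return sum(p.stat().st_size
               for p in Path(root).rglob("*") if p.is_file()) / 1024**3

case_dir = r"D:\Cases\ExampleCase"        # illustrative paths
evidence_dir = r"E:\Evidence\ExampleCase"
print(f"Case folder:     {dir_size_gb(case_dir):.1f} GB")
print(f"Evidence folder: {dir_size_gb(evidence_dir):.1f} GB")
```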
  18. Sounds like you have some nice gear, but as you discovered it can be tricky to get the HDD setup right with servers, as they aren't really designed with that in mind. For my setup I have a beefy PC (128GB RAM, dual high-end GFX cards) with the following HDD setup:
     1. C drive is SSD.
     2. Optimization drive is RAID 0 (2 x WD Black 2TB drives).
     3. Evidence drives are whatever drive I have handy with the original data, but never USB; always connected via SATA in a hot-swap cage.
     4. Case drives are generally WD Red drives of varying sizes, also local drives.
     I have a server running Connect which has 144GB RAM and a RAID 5 array. I process on the PC, then transfer across the network to the server for hosting. Both have ShadowProtect, and the server also has Veeam Endpoint backing up to a secondary location for added redundancy.
  19. Thanks Jon, that breaks it down nicely. For anyone considering this process, it works well, and although there are quite a few steps, once you've done it a couple of times it only takes a few minutes to get it in place.
  20. I agree with the difficulties when adding multiple sources; however, I suspect it's done that way to give some granular control over having different processing options for different source types. For me, when I'm processing multiple sources I tend to have them all located in a single parent folder, then I just select that folder for processing. As long as 'process subfolders' is ticked there are no issues, and you can have multiple different source types (PST, docs, pics etc.). I'm not sure how having a pool of processed sources might work; my understanding is that when a source is processed, a database is created as part of the indexing process. If you have multiple processed sources you are going to have multiple databases, which won't work together. I suspect the ability to merge cases would need to be in place first before this type of pool-based source would be workable, but I could be wrong.
  21. Jcoyne, I've been running live backups on my Connect server for quite a while with no issues; provided your backup tool of choice uses Volume Shadow Copies, I don't think shutting down the services is necessary. The only downside is that if anyone is actively reviewing data at the time, it is quite slow.
  22. Just a follow-up reply, as I know Jon will be putting something detailed here after we spent quite a bit of time talking on the phone about this. For anyone else who needs this numbering, it can actually be accomplished by creating Relativity load files and then importing them back into the case as an overlay (a rough sketch of the load file format follows below).
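As a rough illustration of what such a load file can look like, here is a minimal sketch that writes a Concordance-style DAT overlay mapping each document ID to a new numbering field; the field names, values and output encoding are illustrative and may need adjusting to what the importing tool expects:

```python
COL = chr(20)     # Concordance/Relativity default column delimiter (ASCII 20)
QUOTE = chr(254)  # text qualifier 'þ' (ASCII 254)

def dat_line(values):
    """Join values into one DAT row: þvalueþ<DEL>þvalueþ..."""
    return COL.join(f"{QUOTE}{v}{QUOTE}" for v in values) + "\n"

def write_overlay(rows, out_path):
    """rows: iterable of (doc_id, new_number) pairs."""
    with open(out_path, "w", encoding="utf-8") as f:
        f.write(dat_line(["DOCID", "PAGE_LABEL"]))  # illustrative field names
        for doc_id, number in rows:
            f.write(dat_line([doc_id, number]))

write_overlay(
    [("DOC000001", "ABC.001.0001"), ("DOC000002", "ABC.001.0002")],
    "overlay.dat",
)
```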
  23. I thought I'd quickly reply here to say that I have actually spent quite a bit of time on the phone with one of the Vound guys discussing this issue and spitballing some ideas about how to implement it.
  24. I note that we can now see attachments at the bottom of the preview window when reviewing batches. One final touch that is strongly needed is a visual cue in the list down the left-hand side: a simple paperclip as part of the icon for items that have attachments would make it far easier and more intuitive to identify which files in a batch have attachments.
  25. We are looking at the other end of the e-discovery process with some of our clients, particularly around production, which is where this is coming from. For example, a witness statement may make reference to certain documents which are not in any order or pattern, so we may need to present many documents in a completely random order. Currently I would have to print them out and manually sort them, which is fine if there is only a handful of documents, but when we go into the hundreds or thousands it becomes an issue. My thought is to have the ability to import a keyword list that is already in the required order, and then maintain that document ordering throughout the search and export phases; possibly a checkbox in the 'keyword' facet area that says 'preserve order' or 'preserve list order' (a rough sketch of the ordering logic follows below).
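A minimal sketch of the ordering logic only; the search and export themselves are out of scope, and all names here are illustrative:

```python
from pathlib import Path

def ordered_doc_ids(keyword_file, documents):
    """documents: list of (doc_id, text) pairs.
    Returns doc_ids sorted by the position, in the keyword list, of the
    first keyword each document contains; documents with no hit sort last."""
    keywords = [k.strip().lower()
                for k in Path(keyword_file).read_text(encoding="utf-8").splitlines()
                if k.strip()]

    def order_key(doc):
        _, text = doc
        text = text.lower()
        hits = [i for i, k in enumerate(keywords) if k in text]
        return min(hits) if hits else len(keywords)

    return [doc_id for doc_id, _ in sorted(documents, key=order_key)]

docs = [("DOC3", "minutes of the board meeting"),
        ("DOC1", "witness statement of J. Smith"),
        ("DOC2", "annexure A referred to in the statement")]
# keywords.txt lists terms in the required presentation order, one per line
print(ordered_doc_ids("keywords.txt", docs))
```

Because Python's sort is stable, documents matching the same keyword keep their existing relative order.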