Everything posted by AdamS

  1. Neil, it would be a simple matter to use Intella's inbuilt ability to detect foreign-language documents, isolate the 5000 Spanish documents, and then export only those documents into the load file.
  2. Chris, the only time I've seen this before was when the indexing process was interrupted or failed to complete properly. I would try reindexing (if you haven't already). Also, for urgent issues I'd always suggest starting a ticket or emailing support directly; as much as the team try to monitor and reply here, I've noticed that from time to time they are not as active as I'm sure they'd like to be. I suspect close to update release time they get caught up in the work and can't be here as much...
  3. Hi Jon, do you know if this is high on the priority list to fix? Could I humbly suggest bumping it up if it's not? While the workaround is simple enough for a couple of documents, it's not really something I'd like to contemplate on your average discovery matter with thousands (or tens of thousands) of documents. Added to that, I feel safe in thinking just about any set of documents is going to have this issue. Is there a way for us to identify documents which may have landscape pages en masse with Intella currently?
  4. Corey, do you use batching to assist your customers with the review process? I think you could completely exclude irrelevant items when creating the batches. Not precisely what you are asking for, I know, but it's a workable solution, and this method would also negate the need for the client to worry about the dedupe button.
  5. Okay Corey, I've found myself with some time on my hands, so I ran a test on some existing data which I had already used to create a Relativity load file. Below are the steps I took and the results achieved when importing the overlay back into the case.

First step is to set up custom columns. I like to do this prior to indexing my source data: basically add sources but do not index, then once the custom columns are set up go back and 'reindex' the data. You don't have to do this; you can index your data and then add custom columns, however you will need to reindex after adding custom columns, so you are better off time-wise to index only once if you can set your workflow that way.

1. Custom Columns

For what you are trying to achieve I would set up the following custom columns (the names are not important, those are just what I use):

DOCID
PARENT_DOCID
ATTACH_DOCID
BEGINGROUP
ENDGROUP

I set these up without mapping them to any specific fields in the Properties or RAW DATA tabs, so I just use the options like the pic below; this just creates a column without any data populated. They need to be kept blank in this instance so I can map them when I overlay the load file data.

2. Loadfile Creation

After the data is indexed and the load file data selection is done, it's time to create my load file, ensuring I use the same names as my custom columns for continuity. The pics below show the field selection options I chose for these fields when creating the load file:

DOCID - Default field created when you set up the load file; depending on your naming conventions it will be something like PREFIX.000001.0000023
PARENT_DOCID - Exactly what it sounds like: the same naming convention, but reflecting the parent document
ATTACH_DOCID - As you would expect, the child items
BEGINGROUP - The first file in a group, which will be the parent
ENDGROUP - The final file in a group, i.e. the last child item

For the purpose of this discussion you can ignore the BATES numbering columns. The picture below is an excerpt of my load file showing that the relationships are captured at load file creation using the DOCID numbers.

3. Import .DAT Loadfile as Overlay

Now go to File --> Import Loadfile. Change the Import Operation to "Overlay", then use the "..." button to navigate to the .DAT file from the load file you just created and click 'Next'. You should now have something like the screen below.

Now we need to map the columns from the load file to the custom columns you created. Click 'Next', then begin the mapping exercise. You don't have to map all the fields, just the ones you need: in the right-hand box under 'Intella Columns', locate the custom fields you created, highlight them, then click the left blue arrow to move each across to match the corresponding load file field you want to map it to. Once you have all the desired fields mapped, click 'Import'.

IMPORTANT - it's very important that you have the overlay options (top left) field correctly set up, otherwise the import will not work. I have used the MD5 hash as this is part of my load file, however the ITEM_ID would be the logical choice provided it is part of your load file.

The screenshot below shows my new fields exposed but unpopulated at the beginning of the overlay import (the column with numbers is the Item ID field). This overlay import was for 26k items and took about 15 minutes. The screenshot below shows the result after the process has completed, and you can see the new fields have been populated.
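For anyone trying to picture how those family fields hang together, here's a rough illustration in Python. The DOCID values are made up, and the derivation of BEGINGROUP/ENDGROUP is just my reading of the load file excerpt, not Intella's internal logic:

```python
# Sketch of the family fields: each family member shares the parent's
# DOCID as BEGINGROUP and the last attachment's DOCID as ENDGROUP.
# DOCIDs below are invented for illustration.
from collections import defaultdict

docs = [
    ("EXPORT.00000001.00000001", None),                        # parent email
    ("EXPORT.00000001.00000002", "EXPORT.00000001.00000001"),  # attachment 1
    ("EXPORT.00000001.00000003", "EXPORT.00000001.00000001"),  # attachment 2
]

children = defaultdict(list)                  # parent DOCID -> child DOCIDs
for doc_id, parent_id in docs:
    if parent_id:
        children[parent_id].append(doc_id)

for doc_id, parent_id in docs:
    top = parent_id or doc_id                 # the family's parent DOCID
    begingroup = top                          # first file in the family
    endgroup = (children[top] or [top])[-1]   # last child, or the parent itself
    print(doc_id, begingroup, endgroup, sep="  |  ")
```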
Now to address your specific needs. This is now achievable simply by sorting by the DOCID field:

1. Sorting by DOCID achieves this.
2. The BEGINGROUP field is what you want here, I think; this is the parent file and the first file in a family.
3. I'm not sure I understand the goal here. The very nature of hashes is to be unique, so it would go against all forensic procedures to have different files with the same hash. What sort of custom deduping are you wanting? Perhaps this can be achieved now with the custom fields.

This next one is another I'm not sure I'm understanding correctly. Do you mean you have two emails which both contain exactly the same child items? If that's the case, a normal dedupe would take care of it now, as all items would have duplicates. If you mean you have two identical parent emails but with slightly different children, and you want to keep only one of those parents and the resulting children, I believe you can accomplish that now by using top-level deduplication and then tagging only children of the deduped top-level items (see the sketch below). Perhaps you could elaborate on this point a bit, as I'm not sure what your goal is here.

Intella already has fairly strong customisation ability for load file creation, and in fact I found it's just as extensive (if not more so) than the other common tool used for processing data. I've found that eDiscovery can vary greatly from client to client as to what they would like to see; however, you are correct that there are many data points which are standard across load files, even if the end client doesn't see them on the review side of things. I hope I have at least answered some of your questions.
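As a rough sketch of that top-level dedupe idea, here is the logic spelled out in Python, assuming the load file has been exported to CSV with the custom columns discussed (DOCID, PARENT_DOCID, MD5). The column names and file name are illustrative only; this is not Intella's API, just the family-based rule made concrete:

```python
# Keep one full family per top-level MD5 hash; assumes every child row's
# PARENT_DOCID points at a parent row present in the same file.
import csv
from collections import defaultdict

def dedupe_families(rows):
    """Return the DOCIDs to retain after top-level deduplication."""
    families = defaultdict(list)               # parent DOCID -> family rows
    for r in rows:
        families[r["PARENT_DOCID"] or r["DOCID"]].append(r)
    parent_md5 = {r["DOCID"]: r["MD5"] for r in rows if not r["PARENT_DOCID"]}
    keep, seen = [], set()
    for parent_id, family in families.items():
        md5 = parent_md5[parent_id]
        if md5 not in seen:                     # first family with this hash wins
            seen.add(md5)
            keep.extend(r["DOCID"] for r in family)
    return keep

with open("loadfile.csv", newline="", encoding="utf-8") as f:
    retained = dedupe_families(list(csv.DictReader(f)))
print(f"{len(retained)} items retained after top-level dedupe")
```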
  6. Corey, I suspect you can accomplish some (if not all) of this currently by using custom columns and then creating an overlay to import back into the case. I currently have quite a few custom columns that I add to a case before I index any data, to avoid the need for reindexing. The parent/child relationship is created when preparing load files, so I suspect there is a way you can leverage this back into the case with an overlay. Sorry, I don't have the time at the moment to run any tests and provide you with something definitive, but I suspect Jon won't be too far behind in providing something more substantial.
  7. While you could spend much time testing and trying beta drivers or other tricks, I think ultimately the best solution here would be to work through this with the Intella dev guys to find a permanent fix. I've never seen anything remotely like what you are experiencing, but it would seem that from time to time Nvidia and Intella don't play nice together. If you are desperate for a short-term fix, I would suggest possibly going out and grabbing a cheap video card with an ATI chip, removing Nvidia from the equation until a suitable fix can be found... or possibly beta drivers.
  8. I didn't realise this either, and it's something I've been wanting for ages, haha.
  9. The parent_tag/child_tag syntax will assist in the short term; however, to avoid the need for it, having the ability to simply tick a box and type a 'Parent' name as part of the AutoTag process would be ideal. The changes I'm thinking of are giving the ability to apply parameters to keyword lists for searching and auto tagging. For comparison, if I want to search across the subject lines ONLY of all emails, I do the following steps:

1. Highlight emails in the Type --> Communications facet, right click and select 'Include'
2. Select the 'Options' button next to the free text search field (top left) and untick all options except 'Subject'
3. Enter the search term in the free text search field and press Enter
4. Results display matches across emails only, in the Subject field
5. I can then highlight and tag those results before moving on to the next search term

Using that as a basis for what I'm looking for, imagine making that possible using a keyword list so we can avoid typing in hundreds of individual search terms. When we add a keyword list and then select 'AutoTag', the only option we can change is the tagging rules (item only, including child items, or including all family tree items). There would be many uses, I think, where having the ability to apply the following filters to keyword lists AND have them auto tag would be great:

- Dedupe/ignore irrelevant items
- Specify fields to search across (subject, text/body, email addresses etc.)
- One tag only per keyword per item/document
- First occurrence applies within a single item (a single item has multiple responsive keywords; only the first responsive keyword's tag is recorded)
- Highest number of hits on a single document (a single item has multiple responsive keywords; the keyword with the highest number of occurrences is recorded)

Just a few thoughts, but I'm sure there are other ways we could give some granular control over keyword list auto tagging (a rough sketch of the last two rules follows below).
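To pin down what I mean by those last two rules, here is a sketch in Python. Everything in it (the item structure, the keyword list) is hypothetical, purely to illustrate the intended tagging behaviour:

```python
# Two proposed autotag rules: "first responsive keyword wins" vs.
# "keyword with the most hits wins". Items and keywords are made up.
import re

keywords = ["invoice", "contract", "payment"]
items = {
    "DOC-001": "Contract attached; payment due on invoice receipt. Payment terms to follow.",
    "DOC-002": "Re: invoice",
}

for item_id, text in items.items():
    hits = {kw: len(re.findall(re.escape(kw), text, re.IGNORECASE))
            for kw in keywords}
    responsive = {kw: n for kw, n in hits.items() if n}
    if not responsive:
        continue
    # Rule: first occurrence applies - tag with the keyword appearing earliest.
    first = min(responsive, key=lambda kw: text.lower().find(kw))
    # Rule: highest number of hits - tag with the most frequent keyword.
    top = max(responsive, key=responsive.get)
    print(f"{item_id}: first-occurrence tag = {first}, highest-hits tag = {top}")
```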
  10. The autotag feature when undertaking keyword list searches is something I use quite a bit, but it would be great to see some further control over how those tags are applied. Let's say I already have a dozen or so tags applied and a structure built up; if I import a new keyword list and auto tag, those new tags are going to be interspersed between my existing tags, basically making a mess and triggering my neatness OCD. To wit, currently we can set the tagging preference only (i.e. tag the selected item only, tag its children as well, etc.). It would be great to simply have a check box and be able to put that list of new tags into its own nested parent tag; for example, all these new keyword tags would sit under a parent tag 'KWS 2' or whatever we want to call it. That way, once the search and subsequent tag operation has completed, it is very easy to work with that new set of tags. Secondly, we can't influence what those new keywords are searching across; it would be another good addition if we could apply exceptions/inclusions or the Options filter to the autotag/search process to limit the false positives and the cleanup we have to undertake after the operation.
  11. This might seem a little trivial considering all we need to do is copy the .xml file to the correct folder in the AppData location; however, once we do that we need to reload the case for it to be visible. If an 'import/export template/profile' button existed, then it might negate the need to reload the case...?
  12. Expanding on the above query: I'm attempting to have an all-encompassing field like the Primary Date, but then manipulate the way the data displays, so instead of the full date and time it will just display the UTC offset, say 1000 or AEST for Sydney. In my testing I tried creating a custom field which looked for the email sent record (PR_CLIENT_SUBMIT_TIME) field in RAW DATA, and also in the Headers field, then changed the date/time format using z(Z), or just z or Z, and a few other variations. After reindexing I cannot get this field to populate with any data at all. I have tried about 10 different date/time formats and even reverted back to the default. I have tried being specific to emails only, or 'ANY', in the setup options. I have all the other custom fields set up and working correctly (email importance, sensitivity, read receipt request etc.); this is the only one causing me problems. Any advice appreciated.
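As an aside for anyone following along, those z/Z pattern letters look like Java-style date format codes. Purely to illustrate the output I'm after (offset vs. zone name, not Intella's behaviour), the equivalent directives in Python produce this for a Sydney timestamp; the date itself is just an example:

```python
# Illustration only: the two display styles discussed above for a
# Sydney-time email, using Python's strftime equivalents of z/Z.
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+

sent = datetime(2023, 6, 1, 9, 30, tzinfo=ZoneInfo("Australia/Sydney"))
print(sent.strftime("%z"))  # +1000 (UTC offset; +1100 during daylight saving)
print(sent.strftime("%Z"))  # AEST  (zone name; AEDT during daylight saving)
```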
  13. I'm running some load file testing, attempting to duplicate load files created with other software, and I have a question. Essentially what I'd like to do is duplicate the Primary Date column, but have the date format display in a couple of different ways. My thought was to create a custom field and then duplicate the fields which are listed in the Preferences section for Primary Date; however, there is not enough information in the Preferences section for me to duplicate the Primary Date field. Is there a file somewhere that shows precisely which fields within Raw Data (or elsewhere) are being parsed for the Primary Date field?
  14. I'm trying to find out if I can customise the numbering used by Intella when creating load files. Currently I get something like the below, where 'EXPORT' is the prefix which is manually set: EXPORT.00000001.00000001 or EXPORT.00000001.00000001.00000001. What I would like to do is restrict the width (currently 8 characters per set) to something like EXPORT.001.000001. Is there a config file I can edit, or somewhere I can customise this? Edit: sorry, I re-read the user manual and found the instructions there on how to do this... disregard.
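(For anyone landing here with the same question before checking the manual: the target scheme is just zero-padded counters. An illustrative snippet, with made-up counter values and widths:)

```python
# Illustration of the desired numbering: a 3-digit set counter and a
# 6-digit document counter instead of 8 digits each. Values are made up.
prefix, box, doc = "EXPORT", 1, 23
print(f"{prefix}.{box:03d}.{doc:06d}")  # EXPORT.001.000023
```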
  15. Any known issues with report creation? I started a report for all known artifacts (around 300k) and left it running overnight; checking a short time ago, it still says it's running, but no actual files are appearing in the target location. Is this designed to be more targeted, using a smaller number of items perhaps?
  16. Ahh, I possibly did, Igor. I'll revisit that when I get a chance to confirm, but I suspect that's what I did.
  17. Just loaded up the new version (1.0.2) and wanted to confirm if this is expected behaviour. I tried to add two different E01 images to index; however, indexing completed after a few seconds with no data found and no errors. I removed the sources and added just a single image, and all is working as expected.
  18. Alternatively, you could produce a report of the messages to CSV, then copy out the relevant message you are interested in.
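If you end up doing this a lot, pulling the relevant message out of the CSV is easy to script. A minimal sketch; the file name and the "Subject" column are assumptions about the report layout, not a documented format:

```python
# Pull matching messages out of a CSV report; filter text is hypothetical.
import csv

with open("messages_report.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        if "quarterly forecast" in row["Subject"].lower():
            print(row)
```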
  19. Sorry Vince, the issue didn't occur again, so I didn't chase it any further.
  20. Okay, some initial feedback. Firstly, I just want to acknowledge that I know this is a first-release beta, so some of my thoughts below are likely already on the map, and some are likely far down that map. But my initial excitement about this software wasn't misplaced: I'm extremely impressed and can't wait to see how this develops. I can already see a place for this tool in my day-to-day work life.

Testing Notes
I threw in an image of a PC and an iMac just for giggles. I'm guessing at this early stage the concentration has been on support for the Windows OS, as far fewer types of artefacts were identified for the Mac, but I was kind of expecting that for such a new bit of gear.

Test Machine Specs
Core i7 with 64GB RAM running Windows 7 x64

Installation
Install went smoothly, however it did take around 45 mins. I'm assuming that as a first beta release this is pretty low on the priority list, and I would expect it to improve and change as the package develops.

Case Setup
Case setup was extremely simple: just a couple of fields to fill in, then point to the disc image to ingest.

Processing
120 GB PC image - started at 1800 hrs 25/10/18; identified user accounts, and other artefacts started appearing after 1 min of processing; 48 min total processing time.
1TB iMac image - started at 1900 hrs; again, within 1 minute I was seeing data and could triage results; 1 hr 13 min total processing time.

Notes
No video or audio 'open in external application' option; possibly intentional at this stage. Other viewers seem to work for pics and docs.

Thoughts and Ruminations

USB Logs
Would be nice to see some other info here, if it's possible to show any file movement or access at the same time as the devices are connected. The Links view shows the user account that was logged in; it would be nice to see this in the Events view as well, maybe on the far right side in the boxes for each item.

Event Log Viewer
Would be nice to see more information around the event types, maybe another tab next to the 'Properties' tab when selecting a log. Also a filter ability to isolate specific types of event logs, possibly with an auto filter for event logs of common interest (shutdown/startup, virus scan, Windows Update, Windows restore, restore point creation).

Notable Program Usage
Expand notable program usage (likely already high on the list), maybe with the ability to filter from a predefined list (check boxes), and possibly the ability to add custom programs based on the .exe name. In my head I'm seeing something similar to what IEF uses when determining which app artefacts to go looking for.

Deleted File Activity
Would be great to add a tab next to the 'Properties' tab to show more information, such as which user was logged in at the time; currently this can be seen in the Links view only.

User Profiles
The ability to filter all events based on a user profile, i.e. build a full timeline of activity for a single user by session linkage.

Geolocation
Would be nice to have a map with geolocation items (for offline use) AND a direct link from the Geolocation field to Google Maps for online use.

Cosmetic Stuff
Collapse/expand all option in the search window for facets. Create thumbnail pics for video files.

Data Support
Support for mobile phone artefacts like iPhone backups, and identification of those backups which can't be parsed due to encryption (possibly out of scope, but given Intella's existing support for UFDR files this would seem to be a natural progression). Can UFDR files be imported yet, or is that on the roadmap?
Virus scanner logs showing quarantine events, etc. Firewall logs.

I also noted the picture review is nice and fast; the thumbnail caching works fantastically. Great for on-site triage of pics for LEO. I will spend some quality time over the coming weeks to really dig into this, but these are my initial thoughts after a few hours of playing.
  21. Just quietly, I'm excited. I downloaded and started testing on a 120GB disk image, and within 1 minute of processing starting I was able to start triaging and seeing valuable data. I'll withhold any more comments until the indexing process finishes and I can spend a few hours coming up with some constructive testing, but what I've seen in the last 30 minutes or so has me massively impressed. Edit: sorry, just one comment: I love the Events view. A good timeline tool has long been something missing, and the way this presents the data is exceptional. I'll be watching closely to see how the reporting side of this tool develops, as traditionally this is where it can get tricky: porting those timelines out into something useful for clients or third parties to use.
  22. I'll check on that next time I'm at that particular client's, as I don't have access at the moment.