
Leaderboard

Popular Content

Showing content with the highest reputation since 08/12/2012 in all areas

  1. Hi all, we have a question regarding the possibility of activating two-factor authentication in Intella Connect (we are using v. 2.0.1). Is it possible to configure Intella this way? Is there anyone who has done this and can explain how to proceed? Thanks in advance. Regards.
    1 point
  2. Information can be found under: https://support.vound-software.com/#knowledge_base/1/locale/en-us/answer/72/
    1 point
  3. Hi Shaun, Note that you can increase/decrease these amounts of memory to get to a whole number like your desired 8GB. First drag the slider and let it go where it is close to the number you want. Then use the right/left arrow keys to fine-tune the amount. Note that you need to actually hold down the arrow key to have the number increase or decrease. In this example I get 7.03GB when I drag the slider. But now I hold down the left arrow key and the number counts down. I stop at 7GB.
    1 point
  4. Hi Shaun, We have created a development ticket for this. It should be added in a future version.
    1 point
  5. Add the ability to select multiple items using Control +, as opposed to individually selecting and accepting each item, one at a time.
    1 point
  6. Hi Juliana, If your goal is to use existing responsiveness designations to kick-start the learning process of a Predictive Coding algorithm, then this is certainly possible. As an example: if you already have 1000 items coded as responsive/non-responsive and you wish to create a new PC workflow for a different set of items, say 400, then what you can do is select all 400 items plus a reasonable chunk of already-coded items (e.g. 10 responsive and 10 non-responsive) and create a new PC review. Intella Connect will build a model for those 420 items and train it on the 20 items you have already coded. The result will be a model which should already better differentiate between responsive and non-responsive items in the new data. The relevancy score is not yet a separate field/column that you can see in the items table or in exports, but we are planning to add it soon.
    1 point
  7. Vound is pleased to announce the official release of Intella and Intella Connect 2.4.1. Intella and Intella Connect 2.4.1 are available from the Downloads section in the Vound Support Portal, after logging in with your email address and password. Users with a 2.3.x license need to use the Dongle Manager to update their dongle to the 2.4.x license. Please read the Release Notes before installing or upgrading, to ensure you do not affect any active cases. Highlights Added a top-level Sources tab, adding the ability to (re)index individual sources. Added support for Microsoft Teams. Notable improvements for processing BitLocker images and NSF files. Indexing and case merging/exporting performance improvements.
    1 point
  8. Hi, no this is not supported in Intella for PST exports. The redaction option you have for a PST export is to suppress the redacted item. When exporting to Original format, PDF and load files, you have an option to export the redacted image instead of the native item.
    1 point
  9. I recently looked into Magnet Axiom and the Artifact Exchange ( https://www.magnetforensics.com/blog/artifact-exchange-now-open/ ). Is there any way to import an XML report from Axiom and use some of those artifacts?
    1 point
  10. Running a larger environment, we at times have situations where multiple Intella instances or other tools are competing for the same resources. If, for instance, the case folder or the optimization drive runs out of space while processing, Intella will fill the log with errors ("no space left on device") and report a generic error condition. We have also had situations where the case folder location ran out of space, but this remained undetected, resulting in a case that was corrupt without us realizing it. Suggestions: Ensure clear and transparent alerting on filesystem errors that occur during processing. Spawn monitoring threads that monitor (e.g. sample free capacity every x seconds) the available free space on case folder and optimization folder locations. If free capacity drops below a configurable threshold, the monitoring threads can pause the running processing and display an alert (send an email?), allowing processing to be resumed (assuming space was never actually exhausted). Just an idea...
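To sketch the idea (invented names, plain Python, not Intella code) -- a daemon thread that samples free space on the case and optimization folders and fires a callback when capacity drops below a threshold:

```python
import shutil
import threading

# Hypothetical sketch of the suggested watchdog: sample free space on a set
# of folders every `interval` seconds and invoke a callback -- which could
# pause processing or send an email alert -- once free capacity drops below
# a configurable threshold.
def free_space_low(path, threshold_bytes):
    """True if the filesystem holding `path` has less free space than the threshold."""
    return shutil.disk_usage(path).free < threshold_bytes

def start_watchdog(paths, threshold_bytes, on_low_space, interval=30.0):
    """Start a daemon thread that periodically checks `paths`; returns a stop event."""
    stop_event = threading.Event()

    def run():
        while not stop_event.is_set():
            for p in paths:
                if free_space_low(p, threshold_bytes):
                    on_low_space(p)  # e.g. pause processing and alert the operator
            stop_event.wait(interval)

    threading.Thread(target=run, daemon=True).start()
    return stop_event
```

The callback keeps policy (pause, email, resume) out of the monitoring loop itself.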
    1 point
  11. AT&T records are provided in comma delimited .txt files. I believe Verizon records are provided in .csv. Many of the fields already correspond with Intella’s voice/chat fields. The only publicly available tool I’ve found that processes CDR records in any manner is ZetX, but it’s geared more toward cell tower triangulation and less toward the communication analysis. I think this feature would make Intella significantly more valuable for users in law enforcement and the criminal justice system.
    1 point
  12. Hello, We are working on this topic and are planning to add this functionality to the Intella product in the next release. Thank you for your suggestion about Jaccard similarity; this metric is one of the metrics we are testing to improve our near-duplicates analyzer.
    1 point
  13. My team is performing production import tests. Despite achieving some positive results, we still have some problems: 1. When checking for errors in the "match loadfile fields with Intella" portion, we encounter the following image, suggesting problems with the opt file, even though the preview of the items with redaction appeared to be correct. The loadfile can be processed even with these errors. The Map fields options were filled in as in the following image. 2. After processing the loadfile, we realized that some items that should have a redacted image in the visualization panel do not have an image to display. All other files had their image previews displayed properly. Are there any particularities regarding the import of items with redaction that we are missing? The dat and opt files are available in the attachments. export.dat export.opt
    1 point
  14. Hello Jacques, The following post covers a bit about what you're asking and should get you started:
    1 point
  15. Recently we have had a few customers report that they cannot download the GeoLite2 database within Intella/Connect. It looks like the vendor for the database has changed the way the database can be accessed, and Intella/Connect can no longer download it. If you need to install the GeoLite2 database, you will now need to first download the database and then install it manually. See the steps below.
      1. Sign up for a MaxMind account - https://www.maxmind.com/en/geolite2/signup
      2. Go to the downloads area - https://www.maxmind.com/en/accounts/current
      3. From the 'GeoIP2 / GeoLite2' section, select the 'Download files' link.
      4. Download the GeoLite2 City Binary database.
      5. Extract the GeoLite2-City.mmdb file into C:\Users\[USER]\AppData\Roaming\Intella\ip-2-geo-db. Note: you may not be able to see this folder, as it is hidden by default. To go directly to the Roaming folder, type %appdata% into the Windows search box, then press the Enter key. Once done, navigate to the \Intella\ip-2-geo-db folder and put the GeoLite2-City.mmdb file in there.
      6. Open Intella or Connect and verify that the database is installed.
      Please see the following video on the above process:
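For those who prefer to script the copy step, here is a small sketch (assumes you have already downloaded and extracted the MaxMind archive; the target folder name comes from the steps above, everything else is illustrative):

```python
import os
import shutil
from pathlib import Path

# Sketch of the manual install step: copy an already-extracted
# GeoLite2-City.mmdb into Intella's ip-2-geo-db folder under %APPDATA%.
def install_geolite2(mmdb_source, appdata=None):
    appdata = Path(appdata or os.environ.get("APPDATA", Path.home()))
    target_dir = appdata / "Intella" / "ip-2-geo-db"
    target_dir.mkdir(parents=True, exist_ok=True)  # create the (hidden) folder if needed
    target = target_dir / "GeoLite2-City.mmdb"
    shutil.copy2(mmdb_source, target)
    return target
```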
    1 point
  16. Dear All, Important notice: Note that we will be moving to a new support system within the next month. For security reasons you will need to create a new account and password to use on the new support system. More details will be provided in due course.
    1 point
  17. answer to question in: https://support.vound-software.com/#knowledge_base/1/locale/en-us/answer/60
    1 point
  18. Hi, Selective re-indexing is indeed on our roadmap. I see how the change in how items are merged into a case makes sense and how that can be used as a workaround in the interim, so definitely worth looking into!
    1 point
  19. Hi Bryan, At this point the only 'easy' way to show duplicates of a group of items is to use the workaround which you are currently doing. This functionality may be expanded in a future version.
    1 point
  20. Hi all, Here are some updates regarding the progress of W4. Where are we at with the official release? We are planning to have our first official release of W4 this week. The installer for the release will be available for download to our beta testers in the next few days. Beta testers will be able to test the new features which have been added since the beta version was released last year. What new features have been included since the beta release? There have been a number of new features added since the beta version. The new features can't all fit into one post, so over the next few days we will post some of the new features that have been added to W4. That said, here is a short list of what we have added: Reporting wizard which allows for a lot of flexibility when creating forensic reports Ingest a W4 case into Intella Colorized tags for easier tag identification Special Note function. This is useful for adding additional information to discovered artefacts New type of visualization in the Summary tab Thumbnail view for image files Email headers tab
    1 point
  21. Hi Kalin, Re APFS support: this is high on our to-do list. We are just waiting for the functionality to become available. Re thumbnails: we are looking to add a reporting wizard to Intella. This should include the mechanics to export images as thumbnails. Having thumbnails for other file types is a good idea; I will make a ticket for that.
    1 point
  22. Hi Bryan, It's true that the output of CMD when processing tasks could be improved, but there is also another option available. Instead of analyzing the output in the console, you might prefer to open the case logs and monitor the progress there. Here is a snippet showing OCRing starting, progressing and finishing:
      [INFO ] 2019-03-28 13:40:07,100 [CrawlThread] Total page count: 101
      [INFO ] 2019-03-28 13:40:07,109 [CrawlThread] Started OCRing 101 items. Using: ABBYY FineReader Engine
      [INFO ] 2019-03-28 13:40:07,109 [CrawlThread] Settings: Profile: Accuracy Export format: Plain text Languages: English Number of workers: 10 Detect page orientation: true Correct inverted images: true Skip OCRed: true
      [WARN ] 2019-03-28 13:40:07,115 [CrawlThread] Skipped encrypted content item: 1373
      [INFO ] 2019-03-28 13:40:07,116 [OcrServiceProcessor1] OCRing item: 1243
      [INFO ] 2019-03-28 13:40:07,116 [OcrServiceProcessor2] OCRing item: 1244
      ...
      [INFO ] 2019-03-28 13:40:32,470 [CrawlThread] Collecting OCR crawl results
      [INFO ] 2019-03-28 13:40:32,619 [CrawlThread] Collected 0 records.
      [INFO ] 2019-03-28 13:40:32,620 [CrawlThread] Importing OCRed text and extracted entities
      [INFO ] 2019-03-28 13:40:32,889 [CrawlThread] Imported OCR text into 150 items.
      [INFO ] 2019-03-28 13:40:32,938 [CrawlThread] Updating OCR database
      [INFO ] 2019-03-28 13:40:33,182 [CrawlThread] Finished OCR. Total time: 0:26. Items processed: 99
      You could of course monitor the entire log, or use command line programs to grep its contents live for regular expressions of your choosing. That way you only get information about the OCR process itself. As for the second question about preserving temporary files generated during OCRing: this looks like a risky operation to me, and if one is not careful enough, it may produce errors which would be very hard to find.
Fortunately, it shouldn't be needed once we extend Intella so that it re-applies OCRed text to duplicated items discovered when new sources are being added. This is already on our radar.
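For example, a small script along these lines (plain Python, not part of Intella) could follow just the OCR-related lines; the patterns match the log excerpts quoted above:

```python
import re

# Pull only the OCR progress lines out of a case log, so the OCR stage can
# be followed without watching the whole log scroll by.
OCR_LINE = re.compile(r"(Started OCRing|OCRing item:|Finished OCR|Imported OCR text)")

def ocr_progress_lines(log_lines):
    """Return only the lines of the log that report OCR progress."""
    return [line for line in log_lines if OCR_LINE.search(line)]
```

The same regular expression would work with a live `grep`/`findstr` on the log file.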
    1 point
  23. Server-side profiles for users are the subject of another feature that we have on our roadmap. For now they will be stored in the browser's storage, so they will be tied to the browser that the user is using. Good comment though; it may make me bump the priority of persistent user profiles for the next release.
    1 point
  24. Hello Bryan, Please try running the installer like this: setup-intella...exe /S It will run the installer in the background and install Intella in the default location. Some windows will still briefly open and close when certain settings are made, but no user interaction is necessary. Note: we have not tested this switch a lot and therefore we do not officially support it. It worked fine on my system though and I am quite confident that it will work on other systems.
    1 point
  25. We raised this requirement before too. It would be critical for Intella to use the Slack API with Legal Hold privileges to select and pull data from Slack. Slack has become very big, so please count our vote on this too. For API reference see: https://api.slack.com/
    1 point
  26. Has anyone successfully imported a Slack Enterprise messaging archive into Intella? It is in JSON format. Thanks for the help.
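For context, a standard Slack export is a folder per channel containing one JSON file per day, each holding a list of message objects with fields such as "user", "text" and "ts" (a Unix timestamp string). A minimal sketch for reading one (plain Python, not an Intella feature):

```python
import json
from pathlib import Path

# Walk a Slack export directory and flatten it into a list of simple
# message dicts. Assumes the layout described above; adjust as needed.
def read_slack_export(export_dir):
    messages = []
    for day_file in sorted(Path(export_dir).glob("*/*.json")):
        channel = day_file.parent.name  # channel folder name
        for msg in json.loads(day_file.read_text(encoding="utf-8")):
            if msg.get("type") == "message":
                messages.append({
                    "channel": channel,
                    "user": msg.get("user"),
                    "ts": msg.get("ts"),
                    "text": msg.get("text", ""),
                })
    return messages
```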
    1 point
  27. I had been thinking a bit about this question and wanted to throw out an alternative approach. Of course, it's correct that Lucene does not directly support proximity searches between phrases. However, as has been previously mentioned in a pinned post, it does allow you to identify the words included in those phrases, as they appear in overall proximity to each other. Thus, your need to search for "Fast Pace" within 20 words of "Slow Turtle" should first be translated to: "fast pace slow turtle"~20 . This search will identify all instances where these 4 words, in any order, appear within a 20-word boundary anywhere in your data set. Then, with this search in place, you can perform an additional search, applied via an Includes filter, to include your two specific phrases: "fast pace" AND "slow turtle" By doing this, you should be left with a very close approximation of the exact search you initially intended, with your results filtered to only show your exact phrase hits, but within the overall proximity boundary previously specified. Hope that helps!
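To illustrate the two-step filter outside of Lucene, here is a rough plain-Python model (an approximation only: Lucene's ~n slop is edit-distance based, not literally a sliding window, and all names here are invented):

```python
import re

def words(text):
    """Tokenize to lowercase words, roughly like a simple analyzer would."""
    return re.findall(r"[a-z']+", text.lower())

def all_terms_in_window(text, terms, n):
    """Step 1: do all terms occur together inside some n-word window?"""
    w = words(text)
    for i in range(len(w)):
        if terms <= set(w[i:i + n]):
            return True
    return False

def phrase_proximity_match(text, phrase_a, phrase_b, n=20):
    """Step 2: also require both exact phrases to be present."""
    terms = set(words(phrase_a)) | set(words(phrase_b))
    if not all_terms_in_window(text, terms, n):
        return False
    low = text.lower()
    return phrase_a.lower() in low and phrase_b.lower() in low
```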
    1 point
  28. Hi John, That's strange, though, because we kept searching and found that we were able to use RegEx to search properties using a different syntax in the search bar. If we surround the RegEx with a leading and a trailing forward slash "/", the expression also finds hits in the properties.
    1 point
  29. I think what Todd is likely referring to is a Relativity-centric concept rooted in the so-called search term report (STR), which calculates hits on search terms differently than Intella. I know I have communicated about this issue in the past via a support ticket, and created such a report manually in Intella, which is at least possible with some additional effort involving keyword lists, exclusion of all other items in the list, and recording the results manually. What the STR does is communicate the number of documents identified by a particular search term, and no other search term in the list. It is specifically defined as this: Unique hits - counts the number of documents in the searchable set returned by only that particular term. If more than one term returns a particular document, that document is not counted as a unique hit. Unique hits reflect the total number of documents returned by a particular term and only that particular term. I have been aware of this issue for years, and although I strongly disagree regarding the value of such data as presented in the STR (and have written about extensively to my users), the fact is that, in ediscovery, groupthink is extremely common. The effect is that a kind of "requirement" is created that all practitioners must either use the exact same tools, or that all tools are required to function exactly the same (which I find to be in stark contrast to the forensics world). I actually found myself in a situation where, in attempting to meet and confer with an opposing "expert," that they were literally incapable of interpreting the keyword search results report we had provided because it was NOT in the form of an STR. In fact, they demanded we provide one, and to such an extent that we decided that the most expedient course of action was just to create a new column that provided those numbers (whether they provided any further insight or not). So in responding to Jon's question, I believe the answer is NO. 
In such a case, within the paradigm of the STR, a document that contains 5 different keywords from the KW list would actually be counted ZERO times. Again, what the STR does is communicate the number of documents identified by a particular search term, and no other search term in the list. I think it's a misleading approach with limited value, and is a way to communicate information outside of software. Further, and perhaps why it actually exists, is that it sidesteps the issue of hit totals in columns that add up to more documents than the total number identified by all search criteria. In other words, it doesn't address totals for documents that contain more than one keyword. This is in contrast to the reports Intella creates, where I am constantly warning users not to start totaling the columns to arrive at document counts, as real-world search results almost inevitably contain huge numbers of hits for multiple terms per document. Instead, I point them to both a total and unique count, which I manually add to the end of an Intella keyword hit report, and advise them that full document families will increase this number if we proceed to a review set based on this criteria. Hopefully that clarified the issue and provided a little more context to the situation! Jason
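To make the definition concrete, here is a small sketch of the unique-hits arithmetic (an illustration only, not any vendor's implementation):

```python
# "Unique hits": a document counts toward a term only when no other term in
# the list also returned it. Input is a mapping of term -> set of document IDs.
def unique_hits(term_docs):
    counts = {}
    for term, docs in term_docs.items():
        # union of every *other* term's results
        other_docs = set().union(*(d for t, d in term_docs.items() if t != term))
        counts[term] = len(docs - other_docs)
    return counts
```

A document hit by two or more terms contributes to no term's count at all, which is exactly why the columns of an STR never sum to the total document count.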
    1 point
  30. Just quietly I'm excited. Downloaded and started testing on a 120GB disk image, within 1 minute of processing starting I'm able to start triaging and seeing valuable data. I'll withhold any more comments until the indexing process finishes and I can spend a few hours coming up with some constructive testing, but what I've seen in the last 30 minutes or so has me massively impressed. Edit: sorry just one comment, I love the Events view. A good timeline tool has long been something missing and the way this presents the data is exceptional. I'll be watching closely to see how the reporting side of this tool develops, as traditionally this is where it can get tricky. Porting those timelines out into something useful for clients or third parties to use.
    1 point
  31. Just wanting to revisit a wish I had from 2015 to bring it back to life. The timeline view for intella, currently we can't do anything except export to PNG graphic file. Adding the ability to export to HTML or Excel would be a huge benefit. I'm constantly asked for timeline graphs/presentations from clients and have to resort to looking at other Analytics tools which are not exactly built for simple timelining, although they do an admirable job it seems a pity to waste the perfect timeline already showing in Intella.
    1 point
  32. I guess in the future I could select each of the individual MBOXs from the IMAP collection except the ALL MAIL MBOX, index the collection, and then add the ALL MAIL MBOX in as a second step. Anything that was a duplicate in ALL MAIL would be duped out. As a workaround, I showed the "duplicates" column in the listing pane, sorted based on location and tagged for export any item in the ALL MAIL location that did not show a duplicate, but did not tag any item that did show a duplicate. All other relevant items from other Gmail 'folders' were tagged and all tagged items were exported.
    1 point
  33. In the ediscovery world, we are bombarded by both vendors and developers heralding the promise of advanced text analytics capabilities to effectively and intelligently reduce review volumes. First it was called predictive coding, then CAR, then TAR, then CAL, and now it's AI. Although Google and Facebook and Amazon and Apple and Samsung all admit to having major hurdles ahead in perfecting AI, in ediscovery, magical marketing tells us that everyone but me now has it, that it's completely amazing and accurate and that we are Neanderthals if we do not immediately institute and trust it. And all this happened in a matter of months. It totally didn't exist, and now it apparently does, and these relatively tiny developers have mastered it when the tech giants have not. Back in reality, I have routinely achieved with Intella that which I'm told is completely impossible. As Intella has evolved its own capabilities, I have been able to continually evolve my processes and workflows to take full advantage of its new capabilities. As a single user, and with Intella Pro, I have been able to effectively cull data in data sets up to 500 GB into digestible review sets, from which only a far smaller number of documents are actually produced. PSTs, OSTs, NSFs, file share content, DropBox, 365, Gmail, forensic images - literally anything made up of 1s and 0s. These same vendors claim I can not and should not be doing this, it's not possible, not defensible, I need their help, etc. My response is always, in using Intella with messy, real-world data in at least 300 litigation matters, why has there not been a single circumstance where a key document in my firm's possession has ever been produced by an opposing party, that was also in our possession, in Intella, but that we were unaware of? Of course, the answer is that, used to its fullest, with effectively designed, iterative workflows and QC and competent reviewers, Intella is THAT GOOD. 
In the process, I have made believers out of others, who had no choice but to reverse course and accept that what they had written off as impossible was in fact very possible, when they sat there and watched me do it, firsthand. However, where I see the greatest need for expanded capabilities with Intella is in the area of more advanced text analytics, to further leverage both its existing feature set, and the quality of Connect as a review platform. Over time, I have seen email deduplication become less effective, with the presence of functional/near duplicates plaguing review sets and frustrating reviewers. After all, the ediscovery marketing tells them they should never see a near duplicate document, so what's wrong with Intella? You told us how great it is! The ability to intelligently rank and categorize documents is also badly needed. I realize these are the tallest of orders, but after hanging around as Intella matured from version 1.5.2 to the nearly unrecognizable state of affairs today (and I literally just received an email touting AI for law firms as I'm writing this), I think that some gradual steps toward these types of features is no longer magical thinking. Email threading was a great start, but I need improved near duplicate detection. From there, the ability to identify and rank documents based on similarity of content is needed, but intelligently - simple metadata comparison is no longer adequate with ever-growing data volumes (which Intella can now process with previously unimaginable ease). So that's my highest priority wishlist contribution request for the desktop software, which we see and use as the "administrative" component of Intella, with Connect being its review-optimized counterpart. And with more marketing materials touting the "innovative" time saving of processing and review in a unified platform, I can't help but think to respond, "Oh - you mean like Intella has been from very first release of Connect?" 
Would love to hear others share their opinions on this subject, as I see very little of this type of thing discussed here. Jason
    1 point
  34. Hi Todd, yes we are seeing this in many cases now, typically where documents have been image-scanned and therefore the digital metadata needs to be explained. The client just wants to read the docs in chronological order; they are happy to have a team of admin clerks view every document and gather the 'actual date' rather than the date it was scanned, but then there is no way to get this back into Intella. We've logged a mail with support, so fingers crossed for the future.
    1 point
  35. I'm piggy-backing off gjennings post in the other forum titled Adding New Data Fields. I couldn't find an actual request for this feature in the Wishlist forum, so I'm adding it here just to make it official. This is hands-down the #1 item on my Intella Wishlist. I come across issues in nearly every case that could be resolved much more easily if we could import data into custom columns (rather than tags). Thanks.
    1 point
  36. For the moment that is indeed not possible. Please note that the Table does have a Message ID column, so you can show the Message IDs and sort on them. If you have a large number of Message IDs to deal with, you can try the following:
      1. List all items in the table and add the Message ID column.
      2. Export all results as a CSV, using only the Item ID and Message ID columns.
      3. Use Excel or some batch script to filter the CSV so that it only contains the rows with a matching Message ID.
      4. Remove the Message ID column from the CSV, leaving only the Item IDs.
      5. Import this file in the Item ID facet. This gives you the set of items with a matching Message ID.
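The Excel/batch-script filtering step could, for example, be sketched like this (the column names "Item ID" and "Message ID" are assumptions about the export; adjust them to match your CSV):

```python
import csv

# Filter an exported CSV down to a one-column file of Item IDs whose
# Message ID appears in the wanted list, ready for the Item ID facet import.
# Returns how many rows were kept.
def item_ids_for_message_ids(csv_path, wanted_message_ids, out_path):
    wanted = set(wanted_message_ids)
    kept = 0
    with open(csv_path, newline="", encoding="utf-8") as src, \
         open(out_path, "w", newline="", encoding="utf-8") as dst:
        writer = csv.writer(dst)
        for row in csv.DictReader(src):
            if row["Message ID"] in wanted:
                writer.writerow([row["Item ID"]])
                kept += 1
    return kept
```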
    1 point
  37. I would love to be able to search the message-ID field specifically, and do so via a keyword list. What I am trying to do is find specific messages, but not the messages that reference to them. I would like to be able to have a keyword list that looks like: messageID:<ABC123> messageID:<ABC456> etc... Is there currently a way to search this specific field only?
    1 point
  38. When processing data from systems and mobile devices, one very often finds file-based databases and data structures. The most popular is SQLite, but others exist as well (Microsoft EDB, and one could probably even consider plist files to fall into this category). The (table) structure of these files is application-specific, i.e., it varies widely. My proposal would be to create a template format that allows for two things: Template-based specification of (SQL) queries, where the query results would then be represented as items in Intella (either per line or by SQL 'GROUP'); and definition of mappings of query result fields onto custom columns (including type specification, e.g., date, GEO-location coordinates, String, Integer, etc.). Allowing people to share the templates/parsers they have created for the various applications (and versions thereof) would enable the building of a library. The advantage would be that otherwise missed information can be added to event timelines, and app-specific GEO-location data can be extracted and identified.
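To make the idea concrete, here is a minimal sketch of how such a template could work (plain Python; the table, field names and types below are invented purely for illustration):

```python
import sqlite3

# A template pairs an SQL query with a mapping from result columns to typed
# fields; every result row becomes one "item".
EXAMPLE_TEMPLATE = {
    "query": "SELECT id, ts, lat, lon FROM positions",
    "fields": {"id": int, "ts": str, "lat": float, "lon": float},
}

def apply_template(db_path, template):
    """Run the template's query against an SQLite file and return typed items."""
    con = sqlite3.connect(db_path)
    try:
        cur = con.execute(template["query"])
        names = [d[0] for d in cur.description]  # result column names
        items = []
        for row in cur:
            item = {}
            for name, value in zip(names, row):
                cast = template["fields"].get(name, str)
                item[name] = None if value is None else cast(value)
            items.append(item)
        return items
    finally:
        con.close()
```

A shared library of such templates would just be a collection of these query-plus-mapping definitions, one per application and version.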
    1 point
  39. Hi rodrigoalmeida, No, there is no 'soft license' available. Intella/Connect can only be used with a USB license dongle.
    1 point
  40. Information can be found under: https://support.vound-software.com/#knowledge_base/1/locale/en-us/answer/71/
    1 point
  41. Answer can be found under: https://support.vound-software.com/#knowledge_base/1/locale/en-us/answer/70
    1 point
  42. We have recently considered a new deployment scenario for CONNECT. It turned out not to be viable, as it would require the purchase of many more Microsoft server CALs and other Microsoft licenses at significant cost. Hence I wanted to raise the question of what it would take to have the CONNECT server run on Linux instead of Windows (excluding index creation). As it is a Java application it would seem to be portable (possibly with some loss of functionality, such as PST creation). Any thoughts?
    1 point
  43. Can you make one of those fields required? For example: Relevant yes no requires supervisor's attention That could help. If I'm not mistaken, it could even help to properly recompute existing batches (the ones which haven't been finished yet) if you then code one shared item inside them. This is just my evaluation from looking at the code, so please do not attempt this unless you have a backup or are using a test case.
    1 point
  44. Thanks for the input from you both. I have known for years of the database structure of the PST/OST files and have always chuckled a bit at the concept of exporting to native/original given the originating file type. I too would like to see the flexibility to export to .eml or .msg in future releases. In the long run I guess it is just what the client asks for (or what we know they need but they haven't asked for in so many words) that counts.
    1 point
  45. Hi, just realized that the "post-processing steps" cause a 20% usage of the GPU. Would a nice GPU result in performance improvement? Which process steps are affected by the GPU? Thx a lot
    0 points
  46. Hi Kalin, LDAP is currently only being used for Authentication, not Authorization. We decided to keep our Authorization configuration on the Connect side, so that the integration with AD/LDAP wouldn't be overly complicated. The level of automation you are seeking is not something that can be achieved in the current version of our software. I would love to hear from other users too if this is something they would like to see being added, though. CLI/CMD support is currently a PRO/Team specific feature. We are planning to add more automation to Connect in next few release cycles, but we are more leaning towards developing some sort of RESTful API. Again, any feedback from the community about this would be appreciated.
    0 points