Week 3 – 2016

Super busy week! Here we go:

    1. Berla updated their car infotainment analysis software with a host of new features. A majority of the updates appear to relate to usability, data presentation, and formatting. There are also updates to data parsing for Ford Sync Gen1V5 and Toyota.
      Beta versions of the iVe Mobile and Connect software for Android were also introduced; I think previously the mobile app only worked on iOS. The app allows users to determine whether iVe supports a car model and how to access the data, and also to connect with the iVe desktop software (via iVe Connect) to start cases, monitor the progress of acquisitions, and view results.
      Download Link
      iVe v1.8 Released and iVe Mobile

    2. Matt over at 505Forensics has released a new Python script to parse Windows 10 prefetch files. The tool was designed to work on multiple operating systems rather than rely on Windows API calls. The blog post also lists a few future improvements; I’m definitely interested in seeing additional output formats (i.e. TLN) so that I can integrate it into my existing timeline process (a rough sketch of what TLN output could look like follows below). Thankfully the source is located on GitHub, so I’m sure the author won’t mind the additional hands in updating it.
      Script Release: Parsing Windows 10 Prefetch Files on Linux
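      For the curious, TLN is just a pipe-delimited, five-field timeline format (Time|Source|Host|User|Description) with the time as a Unix epoch. Below is a minimal sketch of what emitting TLN from a parsed prefetch record could look like; the record structure is my own invention for illustration, not the script’s actual data model:

        # Minimal TLN sketch; the 'record' dict is hypothetical, not
        # 505Forensics' actual output format.
        def filetime_to_unix(filetime):
            """Convert a Windows FILETIME (100ns ticks since 1601) to a Unix epoch."""
            return int(filetime // 10000000 - 11644473600)

        def prefetch_to_tln(record, host="HOSTNAME"):
            """Emit one TLN line (Time|Source|Host|User|Description) per run time."""
            lines = []
            for ft in record["last_run_times"]:
                if ft == 0:  # Win8+/Win10 prefetch keeps up to 8 slots; 0 means unused
                    continue
                desc = "Prefetch - {} executed (run count: {})".format(
                    record["executable"], record["run_count"])
                lines.append("{}|PREFETCH|{}||{}".format(
                    filetime_to_unix(ft), host, desc))
            return lines

        print("\n".join(prefetch_to_tln({"executable": "CALC.EXE",
                                         "run_count": 4,
                                         "last_run_times": [130975000000000000, 0]})))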
    3. Harlan’s been busy this week with four posts on his blog. The first relates to the null character prepended to some registry value names as a persistence mechanism; as a result, Harlan wrote a new RegRipper plugin (I’ve included a quick detection sketch after the links below).

      Harlan’s second post of the week covered the importance of infrastructure monitoring, using Sysmon, to locate attackers in a network (as they’re trying to get in, or when they’re already hunting around). He also shared some links regarding Windows Event logs that people might find useful. I agree with his points about analysis of prefetch data; however, I do like that there are additional tools available to parse the data. I’ve found that having multiple tools parse the data can showcase the intricacies of the data being examined. Tool writers don’t all have access to the same dataset, and operating systems like to behave strangely when different conditions are met, so the more the merrier! Especially because some tools will provide output without parsing all the available data (most probably because the author didn’t know about the additional artefacts, or because parsing errors went unreported).

      The third post of the week relates to analysis. Harlan has repeatedly asked questions about what the community needs and how we as analysts can advance the field. To his credit he keeps pushing out information and trying to engage with his readership; the tone of this post suggests that he hasn’t had much engagement. I think it’s because people don’t know what they need. They may not know that they’re not analysing the data available, or they may just finish their day job and stop thinking about the state of the industry. Harlan is one of the few people continuously looking for ways to improve both his own analysis process and those of the overall community.

      The main things I took away from this post are that people are parsing the artefacts but not necessarily knowing how to interpret the results, and consequently may come to incorrect conclusions. The final paragraph is key to this post: “What resources do we have available now, and what do we need?”. Honestly, I’m not sure what we need. Training can be difficult to attend and obtain funding for; what I really like is having a forensic image with a defined set of artefacts and a predetermined set of actions that have been performed. The examiner can then go through, see how their tools parse the artefacts, and write a report that shows their conclusions. Something like the cyber forensics challenge or Corey Harrell’s Triage Practical. Challenges like these allow people to download a file, hive, timeline, etc. and try to come to a conclusion. David Cowen, during his year of blogging, had a weekly challenge where he posed questions and the winners received prizes. I get the feeling that when the prize is good enough, engagement increases; it definitely got me to do a bit more research.

      And the final post (as of this posting, though I’m not convinced he’s done for the week yet!) relates to training; mainly conducting a training needs analysis, as well as evaluating the results of the training from both a management and an employee perspective.
      More Registry Fun
      Resources, Link Mashup
      Analysis
      Training
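      As a quick illustration of the null-character trick from the first post, here’s a minimal detection sketch, assuming Willi Ballenthin’s python-registry library; it’s a rough idea of the check, not Harlan’s actual RegRipper plugin:

        from Registry import Registry  # pip install python-registry

        def find_null_prefixed_values(hive_path):
            """Walk a hive and report value names beginning with a null
            character, which regedit and some tools silently mishandle."""
            def walk(key):
                for value in key.values():
                    if value.name().startswith("\x00"):
                        print("{} -> {!r}".format(key.path(), value.name()))
                for subkey in key.subkeys():
                    walk(subkey)
            walk(Registry.Registry(hive_path).root())

        find_null_prefixed_values("NTUSER.DAT")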

    4. Eric Zimmerman has also had a busy week with three posts.
      The first was in response to Harlan’s post about the null character; Eric tested his Registry Explorer tool to see how it handles the null character, and his post shows that it handles it correctly.
      The next two posts relate to the release of his own prefetch parser (written in C#). The tool has both a GUI and a command-line version; if it’s as good as ShellBags Explorer (and all of Eric’s other work, for that matter) then this will be a fantastic resource. These tools utilise the Windows API to decompress Windows 10 prefetch files, so they’ll have to be run on Windows 8 or later (a rough sketch of that API call follows after the links below).
      For those interested, I’d highly recommend watching this week’s Forensic Lunch.
      Registry values starting with a NULL character
      Windows Prefetch parser in C#
      Introducing PeCmd
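      For anyone curious what that decompression involves, here’s a rough ctypes sketch of the native call I believe such tools rely on (RtlDecompressBufferEx with the XPRESS Huffman format); treat it as an illustration of the Windows 10 ‘MAM’ prefetch format, not how PECmd actually does it:

        import ctypes
        import struct

        def decompress_win10_prefetch(path):
            """Decompress a Windows 10 'MAM'-signature prefetch file via
            ntdll's RtlDecompressBufferEx (XPRESS Huffman). Windows 8+ only."""
            data = open(path, "rb").read()
            signature, uncompressed_size = struct.unpack_from("<4sI", data)
            if not signature.startswith(b"MAM"):
                return data  # older, uncompressed prefetch format

            XPRESS_HUFF = 4
            ntdll = ctypes.WinDLL("ntdll")

            # Ask the API how much scratch space the decompressor needs
            ws_size, frag_size = ctypes.c_ulong(0), ctypes.c_ulong(0)
            ntdll.RtlGetCompressionWorkSpaceSize(
                ctypes.c_ushort(XPRESS_HUFF),
                ctypes.byref(ws_size), ctypes.byref(frag_size))

            out = ctypes.create_string_buffer(uncompressed_size)
            workspace = ctypes.create_string_buffer(ws_size.value)
            final_size = ctypes.c_ulong(0)
            status = ntdll.RtlDecompressBufferEx(
                ctypes.c_ushort(XPRESS_HUFF),
                out, ctypes.c_ulong(uncompressed_size),
                data[8:], ctypes.c_ulong(len(data) - 8),
                ctypes.byref(final_size), workspace)
            if status != 0:
                raise OSError("RtlDecompressBufferEx failed: {:#010x}".format(
                    status & 0xffffffff))
            return out.raw[:final_size.value]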

    5. Forensic Lunch, 22nd January 2016! This week’s Forensic Lunch was hosted and posted by David Cowen and covered a few topics.
      • Hal Pomeranz spoke about his Linux memory grabber tool and the updates that he’s made to it. This tool looks really good for those who come across Linux machines in their daily work, as it not only dumps memory but also creates the appropriate Volatility profile for later examination (as well as dumping some other files, such as the bash history). The tool can be found here. One of the interesting things Hal showed was how a user can correlate the shell data from the memory capture with the bash history and obtain the times the commands were run, which I’m sure people will find helpful to put in their timelines (a rough sketch of that follows after this list).
      • Eric Zimmerman explained how his prefetch parser came about and showcased how it works with some examples. Relating this to my comments about Harlan’s second blog post, Eric worked to share his datasets with the community to make sure that tools were able to parse the same data correctly.
      • Matthew ran us through the usage of the HFS+ journal parser, which will be a very useful reference should people need to use it!
        Forensic Lunch 1/22/16
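      On the timeline point above, here’s a rough sketch of pulling command times out of Volatility’s linux_bash text output; the column layout is from memory and may differ between Volatility versions, so treat the regex as an assumption:

        import re

        # Rows look roughly like:
        #  Pid   Name   Command Time                   Command
        #  1834  bash   2016-01-22 14:15:36 UTC+0000   ls -la /tmp
        ROW = re.compile(r"^\s*(\d+)\s+(\S+)\s+"
                         r"(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2} \S+)\s+(.+)$")

        def bash_rows_to_timeline(linux_bash_output):
            """Extract (time, pid, command) tuples from `vol.py ... linux_bash`
            output, ready to merge with the on-disk .bash_history."""
            rows = []
            for line in linux_bash_output.splitlines():
                match = ROW.match(line)
                if match:
                    pid, _name, when, command = match.groups()
                    rows.append((when, int(pid), command.strip()))
            return sorted(rows)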
    6. The Forensic 4Cast Awards nominations are now open! The awards are held during the SANS DFIR Summit in Austin, Texas.*
      2016 Forensic 4:cast Awards – Nominations are Open
    7. Plaso has been updated to version 1.4 with a host of new features: parsing for the $MFT, the USN journal, client-local SCCM logs, and the ProgramsCache registry key will be welcome additions. A Dockerfile (a recipe for building a container image, which should make it easier to run a consistent plaso environment) has also been included. They’ve also turned on file hashing and the status information view by default, and there’s an additional output module for XLSX.
      The new release may not be able to process storage files created with previous versions. As part of my process I generally store the tools that I use within my case; that way I should be able to go back and run the same commands to reproduce my output. The blog post, linked below, covers a few other things that broke in this release as well as the plans for the future.
      On a sad note, the original developer of the tool, Kristinn, has decided to step down as the project lead and has passed it on to Daniel White.
      Plaso has come a long way since Kristinn developed log2timeline in 2009, and it’s been a phenomenal achievement. Well done Kristinn, and I look forward to seeing what you move on to.
      Sprinkling morning dew and summer sunlight – Plaso 1.4 Freya released!
    8. Precise language is something that those dealing with lawyers know all too well. The difference between two words may be inconsequential to regular folk, but it makes all the difference to key decision makers, whether they be a judge or a CEO. Malware Jake runs through part two of his analysis of the lawsuit against Trustwave for allegedly improper IR, covering the language used in reporting.
      In IR reporting, precise language matters

    9. In another post about certification/training, the latest blog post over at Hexacorn covers the author’s suggestion that all IT security specialists should study the CISSP material. Whilst I don’t work in security directly, I definitely see the need for people in charge of computer security to study material vetted by other professionals rather than trying to reinvent the wheel.
      Why you should sit and study for CISSP

    10. The LCDI team over at Champlain College has finished their Raspberry Pi cyber project and released their final report. The project assessed the Raspberry Pi as a potential honeypot device. The team found that Pis are a cost-effective and easy way to create honeypots; however, tools like Nmap can also detect that this is the case.
      Raspberry Pi Cyber Final Report
    11. Those in need of an assembly replay tool should check out the link on TrewMTE’s blog. If you have the x86 assembly code, you can throw it into the simulator (either online or locally) and watch the program step through memory.
      Malicious Code – training simulator
    12. Lastly, Brian Baskin over at Ghetto Forensics has a post up about quickly creating a malware sandbox using Noriben. As mentioned in the post, not everyone is able to upload their malware to online sandboxes, so by following Brian’s instructions a user should be able to roll their own quickly. I can’t say I’ve had to do too much malware analysis, but when I do I will come back to this post!
      Creating a Malware Sandbox in Seconds with Noriben.

And that’s all for Week 3! If you think I’ve missed something, or want me to cover something specifically, let me know at randomaccess3+thisweekin4n6 at gmail dot com.

*No, this isn’t a hint to be nominated; there are those more worthy, and I’m only just starting this.
