Reading a file into a bash script on a per-line basis is pretty simple:
while read -r line; do command "${line}"; done < src.file
Here we read a line from stdin and, as long as a line was read, pass it to command as a single argument.
The -r option prevents backslash escapes in the line from being interpreted.
If you want to prevent leading or trailing whitespace from being removed from the line, then add IFS= to the line:
while IFS= read -r line; do command "${line}"; done < src.file
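A quick demonstration of what IFS= changes (the file name demo.txt is arbitrary): without it, read strips leading and trailing whitespace from each line.

```shell
# Create a line with leading whitespace (file name is arbitrary).
printf '   indented\n' > demo.txt

# Default IFS: leading/trailing whitespace is stripped.
while read -r line; do
  printf '[%s]\n' "$line"        # prints [indented]
done < demo.txt

# With IFS= the line is preserved exactly.
while IFS= read -r line; do
  printf '[%s]\n' "$line"        # prints [   indented]
done < demo.txt
```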
The above is useful when writing something quick on the command line, but in a script it's more customary to write this across multiple lines.
What I tend to do is wrap the while loop within ( )
with the input redirect after the parentheses.
To me this makes it clearer how the bounds of the loop relate to its input.
For example, consider a script which writes the content of a file to stdout, adding line numbers to the output.
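A minimal sketch of such a script, following the structure described above (the variable names are illustrative):

```shell
#!/usr/bin/env bash
# Print the file named in $1 to stdout, prefixing each line with its number.
# The loop is wrapped in ( ) with the input redirection after the parentheses.
(
  lineNo=0
  while IFS= read -r line; do
    lineNo=$((lineNo + 1))
    printf '%4d  %s\n' "$lineNo" "$line"
  done
) < "$1"
```

Run against itself, it prints its own source with each line prefixed by its number.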
cut OPTION... [FILE]...
Print selected parts of lines from each FILE to standard output.
With no FILE, or when FILE is -, read standard input.
Mandatory arguments to long options are mandatory for short options too.
Use one, and only one, of -b, -c or -f. Each LIST is made up of one range, or many ranges separated by commas. Selected input is written in the same order that it is read, and is written exactly once. Each range is one of:

N      N'th byte, character or field, counted from 1
N-     from the N'th byte, character or field, to the end of the line
N-M    from the N'th to the M'th (included) byte, character or field
-M     from the first to the M'th (included) byte, character or field
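For example, here are a few LIST forms applied to a colon-delimited line (the sample text is an arbitrary passwd-style entry):

```shell
line='root:x:0:0:root:/root:/bin/bash'
echo "$line" | cut -d: -f1      # field 1            → root
echo "$line" | cut -d: -f1,7    # fields 1 and 7     → root:/bin/bash
echo "$line" | cut -d: -f3-5    # fields 3 to 5      → 0:0:root
echo "$line" | cut -c1-4        # characters 1 to 4  → root
```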
To get the last field in a line using cut is simple: just wrap cut with a pair of rev commands and then extract the first field:
rev | cut -f1 -d' ' | rev
This works because rev reverses the characters of each line; cut then extracts the required fields (counted from the end), and the second rev restores the original character order.
The only thing you have to remember is that, once reversed, field 1 is actually the last, 2 the one before the last and so on.
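A quick sketch of this in action (the sample text is arbitrary):

```shell
# Extract the last space-separated field of a line.
echo 'one two three' | rev | cut -f1 -d' ' | rev    # prints: three

# Once reversed, field 2 is the field before the last.
echo 'one two three' | rev | cut -f2 -d' ' | rev    # prints: two
```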
In most shells and scripting languages, a pipe is where the output of one command is passed as the input of a second command. It allows the chaining of multiple commands without the need for intermediary files.
Every command has three standard file descriptors open by default:
| File # | Name | Read | Create | Append | Description |
|---|---|---|---|---|---|
| 0 | stdin | `<` | | | Input into the command |
| 1 | stdout | | `>` | `>>` | Output from the command |
| 2 | stderr | | `2>` | `2>>` | Errors from the command |
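A short sketch of the three streams in use (the file names here are arbitrary):

```shell
printf 'hello\n' > in.txt              # prepare some input
tr 'a-z' 'A-Z' < in.txt > out.txt      # fd 0 read from in.txt, fd 1 written to out.txt
ls no-such-file 2> err.txt || true     # fd 2 (the error message) captured in err.txt
cat out.txt                            # prints: HELLO
```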
A Here document is a set of lines in a script which is sent to a command as its input.
It consists of <<delimiter followed by multiple lines, and then a line containing just
delimiter
to mark the end of the document.
For example, here is a four-line document that will be sent to the command as its standard input:
command <<TERMINATOR
line 1
line 2
line $someVar
line 4
TERMINATOR
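Note that variables such as $someVar above are expanded inside the document; quoting the delimiter (<<'TERMINATOR') disables that expansion. A quick sketch:

```shell
someVar=3

# Unquoted delimiter: $someVar is expanded.
cat <<TERMINATOR
line $someVar
TERMINATOR
# prints: line 3

# Quoted delimiter: the document is taken literally.
cat <<'TERMINATOR'
line $someVar
TERMINATOR
# prints: line $someVar
```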
See also: Here strings
Here strings are similar to Here documents but allow you to do the same thing on one line:
command <<<"some string"
For example, the following commands do the same thing: count the number of words in a string.
echo "This is a test" | wc -w

wc -w <<<"This is a test"
You can also pipe the output of commands as a here string:
ps -fe | wc -l

wc -l <<<$(ps -fe)
See also: Here document