Module 8.1

C++ File I/O

Learn how to read from and write to files in C++. Master the fstream library to handle text files, binary files, and build robust file-handling applications that persist data beyond program execution!

45 min read
Intermediate
Hands-on Examples
What You'll Learn
  • fstream, ifstream, ofstream classes
  • Reading text files line by line
  • Writing and appending to files
  • Binary file operations
  • File positioning and seeking
  • Error handling and best practices
Contents
01

Introduction to File I/O

File I/O (Input/Output) allows your programs to persist data beyond their execution. Whether saving user preferences, logging events, or storing application data, file handling is essential for building real-world applications.

What is File I/O?

Imagine you're building a game that needs to save player progress, or an application that stores user settings. Without file I/O, all data would be lost when the program ends. File operations let you read data from files (input) and write data to files (output), creating persistent storage for your applications.

Concept

File Stream

A file stream is a sequence of bytes flowing between your program and a file. C++ treats files as streams of data - you can read from them (input stream) or write to them (output stream).

The <fstream> header provides three main classes for file operations:

  • ifstream - Input file stream (reading)
  • ofstream - Output file stream (writing)
  • fstream - Both input and output operations

The fstream Header

To work with files in C++, you need to include the <fstream> header. This provides three stream classes, each designed for specific operations:

ifstream

Input file stream. Use when you only need to read from a file.

ofstream

Output file stream. Use when you only need to write to a file.

fstream

Both input and output. Use when you need to read AND write.

#include <fstream>  // Required for file operations
#include <iostream>
#include <string>

int main() {
    // Declare file stream objects
    std::ifstream inputFile;   // For reading
    std::ofstream outputFile;  // For writing
    std::fstream file;         // For both
    
    return 0;
}

File Stream Lifecycle

Working with files follows a simple pattern: open the file, perform operations (read/write), and close the file. Modern C++ makes this even easier with RAII - the file automatically closes when the stream object goes out of scope.

// Method 1: Open in constructor (recommended)
std::ofstream file("data.txt");  // Opens file immediately

// Method 2: Separate open call
std::ofstream file2;
file2.open("data.txt");

// Always check if the file opened successfully
if (!file) {
    std::cerr << "Error: Could not open file!" << std::endl;
    return 1;
}

// Perform file operations here...

// Close the file (automatic when object is destroyed)
file.close();  // Optional but explicit
RAII (Resource Acquisition Is Initialization): When a file stream object goes out of scope, its destructor automatically closes the file. This prevents resource leaks even if an exception occurs.
02

Writing to Files

Writing data to files lets you save information that persists after your program ends. Whether it's configuration settings, logs, or user data, ofstream makes writing simple and efficient.

Understanding File Output

File writing (output) is the process of transferring data from your program's memory to permanent storage on disk. Unlike console output with cout which displays text temporarily on the screen, file writing creates persistent data that survives program termination, system reboots, and can be shared across different applications and platforms.

File Writing (Output Stream)

File writing is the mechanism by which programs save data to disk storage. In C++, this is accomplished through output file streams (ofstream) which provide a high-level, type-safe interface for writing various data types to files. The stream abstraction handles buffering, formatting, and low-level system calls, allowing you to focus on what data to write rather than how to write it.

Key Characteristics:

  • Persistence: Data survives program termination
  • Sequential: Data is written in order from beginning to end
  • Buffered: Data is cached in memory before being written to disk for efficiency
  • Type-safe: Stream operators handle type conversion automatically
  • RAII-managed: Files are automatically closed when streams go out of scope
When to Write Files
  • Saving user preferences and settings
  • Logging application events and errors
  • Exporting reports and data analysis
  • Creating configuration files
  • Storing game progress and player data
  • Generating HTML, XML, or JSON documents
  • Database dumps and backups
  • Recording transactions and audit trails
Common Pitfalls
  • Forgetting to check if file opened successfully
  • Accidentally overwriting important existing files
  • Not flushing buffers before program termination
  • Writing to files without proper permissions
  • Ignoring disk space limitations
  • Not handling special characters in filenames
  • Failing to close files explicitly in critical sections
  • Writing sensitive data without encryption

Basic File Writing

Writing to a file with ofstream (output file stream) is as intuitive as writing to cout. You use the same << insertion operator - the only difference is the destination. The stream handles all formatting, buffering, and low-level I/O operations automatically, allowing you to focus on your data rather than the mechanics of file operations.

The Power of Stream Abstraction: The same << operator works for console output (cout), file output (ofstream), and string streams (ostringstream). This unified interface is a cornerstone of C++ I/O, making it easy to redirect output without rewriting code. A function that writes to ostream& can work with any output destination!
#include <fstream>
#include <iostream>

int main() {
    // Create and open a file for writing
    std::ofstream outFile("example.txt");
    
    // Check if file opened successfully
    if (!outFile) {
        std::cerr << "Error opening file!" << std::endl;
        return 1;
    }
std::ofstream is the output file stream class. The constructor creates (or overwrites!) "example.txt" in the current directory. Always check if (!outFile) to verify success. Common failures include: insufficient disk space, write permissions denied, or invalid file path. Without this check, writing to a failed stream silently does nothing - your data vanishes!
    // Write to the file (just like cout!)
    outFile << "Hello, File!" << std::endl;
    outFile << "This is line 2." << std::endl;
    outFile << "Number: " << 42 << std::endl;
    outFile << "Pi: " << 3.14159 << std::endl;
The insertion operator << works identically to std::cout. You can write strings, numbers, or any type that supports stream output. std::endl writes a newline and flushes the buffer, handing buffered data to the operating system (the OS may still defer the physical disk write). Each << operation appends at the current write position. This syntax makes file writing feel natural if you're already familiar with console output.
    // File automatically closes when outFile goes out of scope
    std::cout << "Data written successfully!" << std::endl;
    
    return 0;
}
RAII (Resource Acquisition Is Initialization) automatically closes the file when outFile goes out of scope. No explicit close() needed! The destructor flushes any remaining buffered data and releases the file handle. This guarantees proper cleanup even if exceptions occur, preventing data corruption and resource leaks.

After running this program, example.txt will contain:

Hello, File!
This is line 2.
Number: 42
Pi: 3.14159
Warning: By default, ofstream will overwrite existing files! If "example.txt" already exists, its contents will be replaced. Use append mode to add to existing files.

Appending to Files

Imagine you're maintaining a daily journal file. Each day, you want to add new entries without erasing what you wrote yesterday. Or consider an application log where each program run should add new events without destroying the history. This is where append mode becomes essential - it allows you to add new content to the end of existing files while preserving everything that's already there.

Append Mode (std::ios::app)

Append mode is a file opening mode that positions the write pointer at the end of an existing file, allowing new data to be added after existing content rather than replacing it. When you open a file with std::ios::app, the file is preserved, and all write operations automatically go to the end, regardless of any seek operations.

How Append Mode Works:

  1. File exists: Opens the file and moves the write pointer to the very last byte
  2. File doesn't exist: Creates a new file (same as normal mode)
  3. Every write: Automatically positions at the end before writing
  4. Content preservation: All existing data remains untouched
Append Mode vs Normal Mode
Normal Mode: Opens → Erases all → Writes from start
Append Mode: Opens → Keeps all → Writes at end
Perfect Use Cases
  • Application log files (errors, events, debug info)
  • Transaction records and audit trails
  • Daily journals and timestamped entries
  • Cumulative data collection (sensor readings)
  • Chat message histories
  • Server access logs
Important Distinction: Append mode (std::ios::app) is different from opening a file, manually seeking to the end, and writing. With append mode, every write operation is positioned at the end at the moment it happens, and on most platforms this maps to the operating system's append flag. That makes append mode the safer choice for log files that several processes may write to concurrently.

Opening a File in Append Mode

To activate append mode, pass std::ios::app as the second parameter when constructing the ofstream object. This single flag changes the entire behavior from destructive overwriting to safe content preservation.

#include <fstream>
#include <iostream>

int main() {
    // Open file in append mode
    std::ofstream logFile("log.txt", std::ios::app);
The second parameter std::ios::app is crucial - it tells the stream to append instead of overwrite. Without it, ofstream defaults to truncating (erasing) the file! Append mode positions the write pointer at the end of existing content, ensuring new data is added after what's already there. If the file doesn't exist, it's created just like normal mode.
    if (!logFile) {
        std::cerr << "Error opening log file!" << std::endl;
        return 1;
    }
    
    // New data is added at the end
    logFile << "[2026-02-03 10:30:00] Application started" << std::endl;
    logFile << "[2026-02-03 10:30:05] User logged in" << std::endl;
Even in append mode, always check if the file opened successfully. Once verified, every write operation adds to the end of the file. This pattern is perfect for log files where you want to accumulate entries over time without losing history. Each time the program runs, new log entries are added after the existing ones. The timestamp format helps track when each event occurred.
    std::cout << "Log entries added!" << std::endl;
    
    return 0;
}
Append mode is essential for: application logs, audit trails, cumulative data collection, and any scenario where you need to preserve existing file contents. Common use cases include error logs, transaction records, and incremental backups. Just remember: append mode adds to the end - you can't insert in the middle or overwrite specific lines without rewriting the entire file.

Writing Different Data Types

You can write any data type that works with cout. Let's save structured student data in CSV format.

#include <fstream>
#include <iostream>
#include <string>
#include <vector>

struct Student {
    std::string name;
    int age;
    double gpa;
};
We define a Student struct to organize related data. Using structs makes code cleaner and more maintainable than juggling separate variables. CSV (Comma-Separated Values) is a universal format that Excel, Google Sheets, and databases can easily import. It's human-readable and simple to generate with C++.
int main() {
    std::vector<Student> students = {
        {"Alice", 20, 3.8},
        {"Bob", 22, 3.5},
        {"Charlie", 21, 3.9}
    };
    
    std::ofstream outFile("students.txt");
    
    if (!outFile) {
        std::cerr << "Error opening file!" << std::endl;
        return 1;
    }
We initialize a vector with sample student data using C++11 brace initialization. Vectors are perfect for collections that need to be iterated and saved. Opening the file is standard - create an ofstream and check for errors. If this succeeds, we're ready to write all student records.
    // Write header
    outFile << "Name,Age,GPA" << std::endl;
Best practice: Always write a header row for CSV files! It describes what each column contains, making the file self-documenting. When someone (or another program) opens the file, they immediately understand the data structure. Without headers, readers must guess what each column represents.
    // Write each student (CSV format)
    for (const auto& student : students) {
        outFile << student.name << ","
                << student.age << ","
                << student.gpa << std::endl;
    }
The range-based for loop iterates through all students. For each one, we write fields separated by commas: name,age,gpa. The << operator handles type conversion automatically - integers and doubles are converted to text. std::endl ends each row, creating one line per student. This format matches standard CSV conventions.
    std::cout << "Saved " << students.size() << " students!" << std::endl;
    
    return 0;
}
We confirm success by displaying the count. The file automatically closes when outFile goes out of scope. This CSV file can now be opened in Excel, imported into databases, or parsed by other programs. For more complex CSV needs (quoted fields, embedded commas), consider using a CSV library, but this simple approach works great for basic structured data.

The output file students.txt will be a CSV file:

Name,Age,GPA
Alice,20,3.8
Bob,22,3.5
Charlie,21,3.9
03

Reading from Files

Reading files allows your program to process external data - configuration files, user data, or any text content. The ifstream class makes file input as easy as reading from cin.

Reading Word by Word

The simplest way to read from a file uses the extraction operator >>. This reads whitespace-separated "words" one at a time, just like cin >>.

#include <fstream>
#include <iostream>
#include <string>

int main() {
    std::ifstream inFile("example.txt");
    
    if (!inFile) {
        std::cerr << "Error: Cannot open file!" << std::endl;
        return 1;
    }
std::ifstream is the input file stream class designed for reading. The constructor attempts to open "example.txt" in the current directory. Always check if (!inFile) to verify the file opened successfully. Common failures include: file doesn't exist, wrong path, or insufficient permissions. Without this check, attempting to read from a failed stream leads to undefined behavior.
    std::string word;
    
    // Read word by word until end of file
    while (inFile >> word) {
        std::cout << "Word: " << word << std::endl;
    }
The extraction operator >> reads characters until it hits whitespace (spaces, tabs, newlines). Each iteration of the loop reads one "word". The loop continues while inFile >> word succeeds. When the end of file is reached, the operation fails and the loop exits naturally. This is perfect for processing tokens, but it skips all whitespace - you won't see spaces or newlines in the output.
    return 0;
}
The file automatically closes when inFile goes out of scope (RAII). No explicit close() call needed! This ensures the file is properly closed even if an exception occurs, preventing resource leaks.

Reading Line by Line

Often you need to read entire lines, including spaces. Use std::getline() for this - it reads until it hits a newline character.

#include <fstream>
#include <iostream>
#include <string>

int main() {
    std::ifstream inFile("example.txt");
    
    if (!inFile) {
        std::cerr << "Error: Cannot open file!" << std::endl;
        return 1;
    }
We open the file the same way as before with std::ifstream. The key difference will be using std::getline() instead of >> for reading. This allows us to preserve spaces, tabs, and other characters within each line.
    std::string line;
    int lineNumber = 1;
    
    // Read line by line
    while (std::getline(inFile, line)) {
        std::cout << lineNumber << ": " << line << std::endl;
        lineNumber++;
    }
std::getline(inFile, line) reads all characters up to (but not including) the newline character and stores them in line. Unlike >>, this preserves spaces within the line. The loop continues until end-of-file is reached. We track lineNumber to show which line we're processing - useful for file analysis, log processing, or creating line-numbered output.
    return 0;
}
Again, RAII handles cleanup automatically. This pattern is perfect for configuration files, log analysis, or any text processing where line structure matters. If the file has 100 lines, the loop runs 100 times, processing each line individually.
When to use which?
  • >> - For reading individual values (numbers, single words)
  • getline() - For reading entire lines or text with spaces

Reading the Entire File

Sometimes you need the entire file contents as a single string for processing or analysis.

#include <fstream>
#include <iostream>
#include <string>
#include <sstream>

int main() {
    std::ifstream inFile("example.txt");
    
    if (!inFile) {
        std::cerr << "Error: Cannot open file!" << std::endl;
        return 1;
    }
We include <sstream> to use std::stringstream, which acts as an in-memory string buffer. This allows us to efficiently accumulate the entire file contents before converting to a single string.
    // Method 1: Using stringstream
    std::stringstream buffer;
    buffer << inFile.rdbuf();
    std::string contents = buffer.str();
inFile.rdbuf() returns a pointer to the file's internal stream buffer. Streaming it into the stringstream with << copies the entire file content in one operation. buffer.str() then converts the buffer to a std::string. This method preserves all characters including newlines, spaces, and special characters, and it's fast and convenient for files that comfortably fit in memory.
    std::cout << "File contents:\n" << contents << std::endl;
    
    return 0;
}
Now contents holds the entire file as one string. This is useful for: searching for patterns, passing to parsers, template rendering, or any operation that needs the full context. Be cautious with large files - reading a 1GB file into memory as a string will consume 1GB of RAM!

Reading Numbers from Files

When your file contains numbers, the extraction operator automatically converts them to the appropriate type.

#include <fstream>
#include <iostream>
#include <vector>

int main() {
    // Assume "numbers.txt" contains: 10 20 30 40 50
    std::ifstream inFile("numbers.txt");
    
    if (!inFile) {
        std::cerr << "Error opening file!" << std::endl;
        return 1;
    }
Reading numbers is straightforward - the file should contain whitespace-separated numeric values. The >> operator automatically performs string-to-number conversion. If the file contains "10 20 30", the operator will parse these as integers, not strings.
    std::vector<int> numbers;
    int num;
    
    // Read all integers
    while (inFile >> num) {
        numbers.push_back(num);
    }
The loop reads one integer at a time using inFile >> num. The extraction operator skips whitespace automatically and converts the next sequence of digits to an int. If conversion fails (e.g., encountering text), the operation fails and the loop exits. Each successfully read number is added to the vector for later processing.
    // Calculate sum
    int sum = 0;
    for (int n : numbers) {
        sum += n;
    }
    
    std::cout << "Read " << numbers.size() << " numbers" << std::endl;
    std::cout << "Sum: " << sum << std::endl;  // Sum: 150
    
    return 0;
}
After reading all numbers into the vector, we can perform calculations on them. This example sums all values. The file "10 20 30 40 50" would be read as 5 integers, and their sum is 150. This pattern works for any numeric type: float, double, long, etc. Just change the variable type.

Parsing CSV Files

CSV (Comma-Separated Values) files are common for data storage. Let's parse a student database.

#include <fstream>
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

struct Student {
    std::string name;
    int age;
    double gpa;
};
We define a Student struct to hold each row's data. CSV files typically have structured data where each line represents a record, and commas separate the fields. <sstream> is crucial here - it lets us parse individual lines by treating them as string streams.
int main() {
    std::ifstream inFile("students.txt");
    
    if (!inFile) {
        std::cerr << "Error opening file!" << std::endl;
        return 1;
    }
    
    std::vector<Student> students;
    std::string line;
We'll store all parsed students in a vector. The line variable will hold each row temporarily as we process it. CSV parsing is a two-step process: first read the line, then parse its comma-separated fields.
    // Skip header line
    std::getline(inFile, line);
Most CSV files have a header row (e.g., "Name,Age,GPA") that describes the columns. We read and discard it with a single std::getline() call before processing data rows. Without this, we'd try to parse the header as a student record and fail.
    // Read each data line
    while (std::getline(inFile, line)) {
        std::stringstream ss(line);
        Student student;
        std::string field;
For each line, we create a stringstream to parse it. This lets us use getline() with a custom delimiter (comma) instead of newline. The field variable temporarily holds each comma-separated value before converting it to the appropriate type.
        // Parse comma-separated fields
        std::getline(ss, student.name, ',');
        
        std::getline(ss, field, ',');
        student.age = std::stoi(field);
        
        std::getline(ss, field, ',');
        student.gpa = std::stod(field);
The third parameter ',' tells getline() to stop at commas instead of newlines. For the name, we read directly into student.name since it's already a string. For age and GPA, we read into field first, then convert: std::stoi() converts string to int, and std::stod() converts string to double. Order matters - fields must be parsed in the same order they appear in the CSV!
        students.push_back(student);
    }
After parsing all fields for one student, we add the complete struct to our vector. The loop continues, processing each line of the CSV until end-of-file.
    // Display loaded data
    std::cout << "Loaded " << students.size() << " students:\n";
    for (const auto& s : students) {
        std::cout << s.name << " (Age: " << s.age 
                  << ", GPA: " << s.gpa << ")\n";
    }
    
    return 0;
}
Finally, we verify the data was loaded correctly by displaying each student. This CSV parsing pattern is extremely common in real applications. For more robust CSV handling (handling quoted fields, escaped commas, etc.), consider using a dedicated CSV parsing library. But for simple files, this approach works perfectly!

Practice Questions: Reading & Writing

Task: Create a program that writes your name on the first line and your age on the second line to a file called "info.txt".

Solution:
#include <fstream>
#include <iostream>

int main() {
    std::ofstream outFile("info.txt");
    
    if (!outFile) {
        std::cerr << "Error creating file!" << std::endl;
        return 1;
    }
    
    outFile << "John Doe" << std::endl;
    outFile << 25 << std::endl;
    
    std::cout << "File written successfully!" << std::endl;
    return 0;
}

Task: Write a program that reads a text file and counts the total number of words.

Solution:
#include <fstream>
#include <iostream>
#include <string>

int main() {
    std::ifstream inFile("example.txt");
    
    if (!inFile) {
        std::cerr << "Error opening file!" << std::endl;
        return 1;
    }
    
    std::string word;
    int count = 0;
    
    while (inFile >> word) {
        count++;
    }
    
    std::cout << "Total words: " << count << std::endl;
    return 0;
}

Task: Create a program that copies the contents of "source.txt" to "destination.txt" while adding line numbers.

Solution:
#include <fstream>
#include <iostream>
#include <string>

int main() {
    std::ifstream inFile("source.txt");
    std::ofstream outFile("destination.txt");
    
    if (!inFile || !outFile) {
        std::cerr << "Error opening files!" << std::endl;
        return 1;
    }
    
    std::string line;
    int lineNum = 1;
    
    while (std::getline(inFile, line)) {
        outFile << lineNum << ": " << line << std::endl;
        lineNum++;
    }
    
    std::cout << "Copied " << (lineNum - 1) << " lines!" << std::endl;
    return 0;
}
04

File Modes

File modes control how a file is opened - whether to read, write, append, or handle binary data. Understanding these modes gives you precise control over file operations.

Available File Modes

C++ provides several file mode flags that can be combined using the bitwise OR operator (|):

  • std::ios::in - Open for reading. Use when you need to read file contents.
  • std::ios::out - Open for writing (creates or truncates). Use when you want to write new content.
  • std::ios::app - Append to end of file. Use when adding to existing content (logs).
  • std::ios::ate - Start at end of file. Use when you need to know the file size first.
  • std::ios::trunc - Truncate file if it exists. Use when you want to clear existing content.
  • std::ios::binary - Binary mode (no text transformations). Use for images, executables, raw data.

Combining File Modes

You can combine multiple modes using the | operator:

#include <fstream>
#include <iostream>

int main() {
    // Read and write (file must exist)
    std::fstream file1("data.txt", std::ios::in | std::ios::out);
    
    // Write in binary mode, truncate existing
    std::ofstream file2("data.bin", std::ios::out | std::ios::binary | std::ios::trunc);
    
    // Append in binary mode
    std::ofstream file3("log.bin", std::ios::app | std::ios::binary);
    
    // Read and write, create if doesn't exist
    std::fstream file4("new.txt", std::ios::in | std::ios::out | std::ios::trunc);
    
    return 0;
}

Default Modes

Each stream type has sensible defaults, so you often don't need to specify modes explicitly:

ifstream

Default: std::ios::in
Opens for reading only

ofstream

Default: std::ios::out
Opens for writing, truncates file

fstream

Default: std::ios::in | std::ios::out
Opens for both, file must exist

05

Binary Files

Binary file I/O writes data in its raw memory format, making it faster and more space-efficient than text files. It's essential for images, audio, and complex data structures.

Why Use Binary Files?

Text files store everything as human-readable characters. The number 1000000 takes 7 characters (7 bytes). In binary, the same integer takes only 4 bytes. For large datasets, this difference is significant!

Binary Advantages
  • Smaller file sizes
  • Faster read/write operations
  • Exact data preservation
  • Direct memory-to-file mapping
Binary Disadvantages
  • Not human-readable
  • Platform-dependent (endianness)
  • Harder to debug
  • Struct padding issues

Writing Binary Data

Use write() to write raw bytes. It takes a pointer to the data and the number of bytes to write.

#include <cstring>   // for strcpy
#include <fstream>
#include <iostream>

struct Player {
    char name[50];
    int score;
    float playtime;
};
This Player struct uses fixed-size types: a 50-character array for the name, an integer for the score, and a float for playtime. Using fixed-size fields is crucial for binary I/O because it ensures every Player object occupies the same amount of memory (at least 58 bytes of data, typically 60 after alignment padding). This predictability makes binary reading/writing reliable and enables random access to records.
int main() {
    // Open file in binary mode
    std::ofstream outFile("player.dat", std::ios::binary);
    
    if (!outFile) {
        std::cerr << "Error creating file!" << std::endl;
        return 1;
    }
Opening with std::ios::binary is essential. It tells the system to write data exactly as it exists in memory, without any character encoding conversions. Text mode might transform newline characters or interpret certain bytes as special characters, corrupting your binary data. Always use binary mode when working with structs, numbers, or any non-text data.
    // Create player data
    Player player;
    strcpy(player.name, "Alice");
    player.score = 9500;
    player.playtime = 45.5f;
We populate the Player struct with sample data. strcpy() copies the string "Alice" into the fixed-size character array. The remaining fields are assigned directly. This struct now exists in memory as a contiguous block of bytes - exactly how it will be stored in the file.
    // Write the entire struct as binary
    outFile.write(reinterpret_cast<char*>(&player), sizeof(Player));
    
    std::cout << "Saved player data (" << sizeof(Player) << " bytes)" << std::endl;
    
    return 0;
}
The write() method requires two arguments: a char* pointer and the byte count. We use reinterpret_cast<char*> to convert the Player pointer to the required type - this doesn't change the data, just tells the compiler to treat it as raw bytes. sizeof(Player) gives the exact size of the struct in memory. This single call writes all fields at once, much faster than writing each field individually.

Reading Binary Data

Use read() to read raw bytes back into memory. The struct definition must match exactly.

#include <fstream>
#include <iostream>

struct Player {
    char name[50];
    int score;
    float playtime;
};

int main() {
    std::ifstream inFile("player.dat", std::ios::binary);
    
    if (!inFile) {
        std::cerr << "Error opening file!" << std::endl;
        return 1;
    }
The Player struct definition must be identical to the one used when writing. Any difference in field order, types, or sizes will cause data corruption when reading. Opening in binary mode (std::ios::binary) is critical - without it, the system might perform text transformations that corrupt the raw byte data.
    Player player;
    
    // Read the struct from binary file
    inFile.read(reinterpret_cast<char*>(&player), sizeof(Player));
We declare an empty Player struct, then read() fills it with bytes from the file. The reinterpret_cast<char*> converts the Player pointer to the required char* type. sizeof(Player) tells read() how many bytes to load. This reads the exact number of bytes that were written, reconstructing the struct perfectly. After this call, player contains all the data that was saved.
    std::cout << "Name: " << player.name << std::endl;
    std::cout << "Score: " << player.score << std::endl;
    std::cout << "Playtime: " << player.playtime << " hours" << std::endl;
    
    return 0;
}
After reading, all struct members are immediately accessible with their correct values. The char array contains the string, the integer has the score, and the float has the playtime - all exactly as they were when saved. No parsing or conversion needed! Binary I/O preserves data types perfectly, making it much faster and simpler than text-based formats for structured data.

Working with Arrays

Binary I/O shines when working with arrays of data. Let's save and load a vector of integers.

#include <fstream>
#include <iostream>
#include <vector>

int main() {
    // Write an array of integers
    std::vector<int> numbers = {10, 20, 30, 40, 50, 60, 70, 80, 90, 100};
    
    std::ofstream outFile("numbers.bin", std::ios::binary);
We create a vector with 10 integers. Vectors store their elements in contiguous memory, making them perfect for binary I/O. The .data() method gives us a pointer to this memory block, allowing us to write all elements in a single operation.
    // Write size first (so we know how many to read back)
    size_t size = numbers.size();
    outFile.write(reinterpret_cast<char*>(&size), sizeof(size));
Critical step: We write the array size first! When reading back, we need to know how many integers to load. Without this metadata, we'd have no way to determine where the array ends in the file. size_t is typically 4 or 8 bytes depending on the system. This header pattern is common in binary file formats.
    // Write all numbers at once
    outFile.write(reinterpret_cast<char*>(numbers.data()), 
                  size * sizeof(int));
    outFile.close();
numbers.data() returns a pointer to the first element. size * sizeof(int) calculates the total byte count (with a typical 4-byte int, 10 numbers × 4 bytes = 40 bytes). This writes all 10 integers in one efficient operation, dramatically faster than writing them one at a time. On a typical 64-bit system the resulting file is just 48 bytes: 8 bytes for the size_t header plus 40 bytes of integers.
    // Read them back
    std::ifstream inFile("numbers.bin", std::ios::binary);
    
    // Read size
    size_t readSize;
    inFile.read(reinterpret_cast<char*>(&readSize), sizeof(readSize));
Reading follows the same order as writing: first the size, then the data. We read the size metadata (8 bytes) into readSize. This tells us how many integers follow in the file, allowing us to allocate the correct amount of memory for the vector.
    // Read all numbers
    std::vector<int> loaded(readSize);
    inFile.read(reinterpret_cast<char*>(loaded.data()), 
                readSize * sizeof(int));
We create a vector sized to hold readSize integers, then read all 40 bytes at once into its memory. The read() call fills the vector with the exact values that were saved. This single bulk operation is very fast: the per-call overhead is the same whether you read 10 integers or 10,000.
    // Verify
    std::cout << "Loaded " << loaded.size() << " numbers: ";
    for (int n : loaded) {
        std::cout << n << " ";
    }
    std::cout << std::endl;
    
    return 0;
}
The output confirms all integers were loaded correctly: "10 20 30 40 50 60 70 80 90 100". Binary I/O perfectly preserved every value. This technique scales beautifully - whether you have 10 or 10 million integers, the code remains the same. Just remember: always write metadata (like size) before the data itself.
Portability Warning: Binary files written on one system may not read correctly on another due to differences in byte order (endianness) or struct padding. For cross-platform data, consider serialization libraries like Protocol Buffers, JSON, or MessagePack.
06

File Positioning and Seeking

File streams maintain a position pointer that tracks where the next read or write will occur. Seeking allows you to move this pointer to any location, enabling random access to file data.

Understanding File Pointers

Every file stream maintains two internal position indicators:

Get Pointer (Input)

Tracks the position for the next read operation. Used by ifstream and fstream.

Put Pointer (Output)

Tracks the position for the next write operation. Used by ofstream and fstream.

Seek Functions

C++ provides four functions for manipulating file positions:

seekg(pos): Move the get pointer (for reading). Example: file.seekg(0) seeks to the start of the file.
seekp(pos): Move the put pointer (for writing). Example: file.seekp(0) seeks to the start of the file.
tellg(): Return the current get pointer position. Example: pos = file.tellg()
tellp(): Return the current put pointer position. Example: pos = file.tellp()

Seek Directions

You can seek relative to three reference points. Let's explore each direction with practical examples.

#include <fstream>
#include <iostream>

int main() {
    std::fstream file("data.txt", std::ios::in | std::ios::out | std::ios::binary);
    
    if (!file) {
        std::cerr << "Error opening file!" << std::endl;
        return 1;
    }
We open the file with both read (std::ios::in) and write (std::ios::out) permissions in binary mode. This allows us to freely move the read/write pointer in both directions. Binary mode is essential here because it ensures precise byte positioning without any character encoding transformations that text mode might apply.
    // std::ios::beg - Beginning of file (default)
    file.seekg(10, std::ios::beg);  // Move to byte 10 from start
std::ios::beg seeks relative to the beginning of the file (byte 0). This is the most common reference point. file.seekg(10, std::ios::beg) positions the read pointer at exactly byte 10, regardless of where it was before. This is useful when you know the exact offset of data from the file's start, like reading a header at a fixed position.
    // std::ios::cur - Current position
    file.seekg(5, std::ios::cur);   // Move forward 5 bytes from current
    file.seekg(-3, std::ios::cur);  // Move back 3 bytes from current
std::ios::cur seeks relative to the current pointer position. Positive values move forward, negative values move backward. This is perfect for sequential operations where you need to skip ahead or backtrack a few bytes. For example, after reading a 4-byte integer, you might use seekg(-4, std::ios::cur) to go back and re-read it.
    // std::ios::end - End of file
    file.seekg(-10, std::ios::end); // Move to 10 bytes before end
    file.seekg(0, std::ios::end);   // Move to end of file
    
    return 0;
}
std::ios::end seeks relative to the end of the file (the last byte position + 1). Since the end is the reference, you typically use negative offsets to move backward from it. seekg(-10, std::ios::end) positions at 10 bytes before the end, useful for reading file footers or trailers. seekg(0, std::ios::end) moves to the very end, commonly used to determine file size with tellg().

Getting File Size

A common use of seeking is determining file size. Let's explore how to use file positioning to calculate the size of a file without reading its contents.

#include <fstream>
#include <iostream>

int main() {
    std::ifstream file("data.txt", std::ios::binary);
    
    if (!file) {
        std::cerr << "Error opening file!" << std::endl;
        return 1;
    }
Opening in binary mode ensures accurate byte counting regardless of the file's content type. Text mode might modify line endings on some systems, giving an inaccurate size. The error check is essential - attempting to seek in a non-existent file would cause undefined behavior.
    // Seek to end of file
    file.seekg(0, std::ios::end);
The seekg(0, std::ios::end) call moves the read pointer 0 bytes from the end, positioning it just past the last byte of the file. The beauty of this approach is that no data is read at all; we simply update the stream's position indicator. This is extremely fast even for gigabyte-sized files.
    // Get position (which is the file size)
    std::streampos fileSize = file.tellg();
    
    std::cout << "File size: " << fileSize << " bytes" << std::endl;
tellg() returns the current position of the read pointer in bytes from the beginning of the file. Since we just moved to the end, this position value is exactly the file size! std::streampos is a special type for file positions that can be compared and printed. This method works for any file type - text, binary, images, videos - anything!
    // Go back to beginning
    file.seekg(0, std::ios::beg);
    
    return 0;
}
After getting the size, the read pointer is at the end of the file. If you want to read data afterwards, you must seek back to the beginning with seekg(0, std::ios::beg). This resets the pointer to byte 0, ready for reading from the start. This pattern is common when you need to allocate memory based on file size before reading the entire file into a buffer.

Random Access to Records

File positioning enables random access to structured data, like accessing specific records in a database. Let's build a system that can read any employee record without reading the entire file.

First, define a fixed-size structure to represent each employee record. Fixed sizes are crucial for calculating positions:

#include <fstream>
#include <iostream>
#include <cstring>

struct Record {
    int id;
    char name[50];
    double salary;
};
// Each Record is sizeof(Record) bytes (typically 64 bytes)
The key to random access is predictability. Each field has a fixed size: int id (4 bytes), char name[50] (50 bytes), and double salary (8 bytes). The compiler may also insert a few bytes of alignment padding, which is why sizeof(Record) is typically 64 rather than 62. Because every Record occupies exactly the same space in the file, we can calculate the exact byte position of any record with simple multiplication. If we used std::string instead of char[], the size would vary from record to record, making position calculation impractical.

This structure has a predictable size, making it perfect for random access. Every record occupies the same amount of space in the file.

Create a binary file with several employee records:

int main() {
    // Open file for binary writing
    std::ofstream outFile("records.dat", std::ios::binary);
    
    // Create sample employee data
    Record employees[] = {
        {101, "Alice", 75000.0},
        {102, "Bob", 68000.0},
        {103, "Charlie", 82000.0},
        {104, "Diana", 71000.0}
    };
    
    // Write all records to file
    for (const auto& emp : employees) {
        outFile.write(reinterpret_cast<const char*>(&emp), sizeof(Record));
    }
    outFile.close();
The write() method takes raw bytes from memory and copies them directly to the file. We use reinterpret_cast<const char*> because write() expects a char* pointer, even though we're actually writing a Record struct. The sizeof(Record) tells it exactly how many bytes to copy. This creates a file layout where Record 0 starts at byte 0, Record 1 at byte 64, Record 2 at byte 128, and Record 3 at byte 192 (assuming 64 bytes per record).

This writes 4 employee records sequentially. Each record is written as raw binary data, positioned one after another in the file.

Open the file for reading in binary mode:

    // Open file for binary reading
    std::ifstream inFile("records.dat", std::ios::binary);
    
    if (!inFile) {
        std::cerr << "Error opening file!" << std::endl;
        return 1;
    }
The std::ios::binary flag prevents the operating system from performing any text transformations (like converting \n to \r\n on Windows). Without this flag, byte positions would be unpredictable because line endings might get modified. Binary mode guarantees that byte 128 in the file is always byte 128, with no surprises. Always check if (!inFile) to ensure the file opened successfully before attempting any read operations.

Binary mode ensures no data transformation occurs, maintaining the exact byte positions.

To read the 3rd record directly, calculate its byte position:

    int recordNumber = 2; // 0-indexed (3rd record)
    
    // Calculate position: record size * record number
    std::streampos position = recordNumber * sizeof(Record);
    // position = 2 * 64 = 128 bytes from start
Since all records are the same size, finding any record is simple multiplication. Record 0 is at position 0, Record 1 is at position 64, Record 2 is at position 128, and so on. The formula is: position = recordNumber × sizeof(Record). This is O(1) complexity - accessing the 1000th record takes exactly the same time as accessing the 1st record! Compare this to sequential reading, which would require reading all 999 previous records first (O(n) complexity). This is the power of random access.

Since records are stored sequentially, the 3rd record starts at byte 128 (record 0 at byte 0, record 1 at byte 64, record 2 at byte 128).

Move the file pointer to the calculated position and read the record:

    // Seek to the calculated position
    inFile.seekg(position, std::ios::beg);
    
    // Read the record at that position
    Record emp;
    inFile.read(reinterpret_cast<char*>(&emp), sizeof(Record));
The seekg() function moves the file's read pointer to the specified byte position instantly - it doesn't read any data, just repositions the pointer. std::ios::beg means "from the beginning of the file". Once positioned, read() loads exactly sizeof(Record) bytes (64 bytes) from that position into our emp variable. This is extremely efficient - no matter how large the file is, accessing any record takes the same amount of time because we jump directly to it without reading anything else.

seekg() instantly jumps to byte 128 without reading the first two records. Then read() loads exactly one Record worth of data.

Output the retrieved record:

    std::cout << "Record " << recordNumber + 1 << ":" << std::endl;
    std::cout << "  ID: " << emp.id << std::endl;
    std::cout << "  Name: " << emp.name << std::endl;
    std::cout << "  Salary: $" << emp.salary << std::endl;
    
    return 0;
}
After reading the data, we can access all the struct members directly. The binary data from the file was copied byte-by-byte into our emp variable, reconstructing the exact Record that was written. Notice we add 1 to recordNumber for display purposes (since arrays are 0-indexed but humans prefer counting from 1). This demonstrates that binary I/O perfectly preserves data types - integers remain integers, doubles remain doubles, and character arrays remain strings. No parsing or conversion needed!

Output:

Record 3:
  ID: 103
  Name: Charlie
  Salary: $82000

The program directly accessed the 3rd record without reading the first two - this is random access in action! This technique is extremely efficient for large files.

Use Case: File positioning is essential for:
  • Database-like file structures
  • Updating specific records without rewriting entire files
  • Reading file headers/metadata
  • Implementing file-based data structures (B-trees, indexes)
07

Key Takeaways

Three Stream Classes

Use ifstream for reading, ofstream for writing, and fstream for both operations.

Always Check File State

Use if (!file) or file.is_open() to verify the file opened successfully before reading or writing.

getline() for Lines

Use std::getline() to read entire lines including spaces. Use >> for whitespace-separated values.

Append Mode

Use std::ios::app to add data to existing files without overwriting their contents.

Binary for Efficiency

Use binary mode (std::ios::binary) with read()/write() for faster I/O and smaller files.

RAII File Handling

File streams automatically close when they go out of scope, preventing resource leaks even if exceptions occur.

Knowledge Check

Quick Quiz

Test what you've learned about C++ file I/O

1 Which header file is required for file I/O operations in C++?
2 What is the default behavior of ofstream when opening an existing file?
3 Which function reads an entire line including spaces?
4 What does std::ios::binary mode do?
5 How do you check if a file opened successfully?
6 Which method is used to write raw binary data to a file?
Answer all questions to check your score