DevToys.PocoCsv.Core
3.1.3
See the version list below for details.
dotnet add package DevToys.PocoCsv.Core --version 3.1.3
NuGet\Install-Package DevToys.PocoCsv.Core -Version 3.1.3
<PackageReference Include="DevToys.PocoCsv.Core" Version="3.1.3" />
paket add DevToys.PocoCsv.Core --version 3.1.3
#r "nuget: DevToys.PocoCsv.Core, 3.1.3"
// Install DevToys.PocoCsv.Core as a Cake Addin
#addin nuget:?package=DevToys.PocoCsv.Core&version=3.1.3
// Install DevToys.PocoCsv.Core as a Cake Tool
#tool nuget:?package=DevToys.PocoCsv.Core&version=3.1.3
DevToys.PocoCsv.Core
One of the fastest CSV reader/deserializers available.
DevToys.PocoCsv.Core is a class library for reading and writing CSV. It contains CsvStreamReader, CsvStreamWriter and the serialization classes CsvReader&lt;T&gt; and CsvWriter&lt;T&gt;.
Read/write and serialize/deserialize data to and from CSV.
- RFC 4180 compliant.
- Sequential reads with ReadAsEnumerable().
- CSV schema retrieval with CsvUtils.GetCsvSchema().
- DataTable import and export.
- Serializer / deserializer.
- Stream reader / writer.
CsvStreamReader
string _file = @"C:\Temp\data.csv";
using (CsvStreamReader _reader = new CsvStreamReader(_file))
{
while (!_reader.EndOfStream)
{
string[] _values = _reader.ReadCsvLine();
}
}
or
string _file = @"C:\Temp\data.csv";
using (CsvStreamReader _reader = new CsvStreamReader(_file))
{
foreach (string[] items in _reader.ReadAsEnumerable())
{
}
}
CsvStreamWriter
string file = @"D:\Temp\test.csv";
using (CsvStreamWriter _writer = new CsvStreamWriter(file))
{
var _line = new string[] { "Row 1", "Row A,A", "Row 3", "Row B" };
_writer.WriteCsvLine(_line);
}
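Note that the second value in the example, `Row A,A`, contains the separator. Per RFC 4180 such a field must be quoted in the output. As an illustration of the rule only (this is not the library's internals, and the helper name is hypothetical), the quoting logic looks like this:

```csharp
using System;

// RFC 4180: quote fields containing the separator, a double quote, or a
// line break; double any embedded quotes. Hypothetical helper for illustration.
static string QuoteCsvField(string value, char separator = ',')
{
    if (value.IndexOfAny(new[] { separator, '"', '\r', '\n' }) < 0)
        return value;
    return "\"" + value.Replace("\"", "\"\"") + "\"";
}

Console.WriteLine(QuoteCsvField("Row A,A")); // prints "Row A,A" (with quotes)
```

Fields containing the separator, a double quote, or a line break are wrapped in quotes, with embedded quotes doubled; all other fields pass through unchanged.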
CsvReader<T>
This reader is faster than CsvStreamReader; it is optimized to deserialize rows to objects.
public class Data
{
[Column(Index = 0)]
public string Column1 { get; set; }
[Column(Index = 1)]
public string Column2 { get; set; }
[Column(Index = 2)]
public string Column3 { get; set; }
[Column(Index = 5)]
public string Column5 { get; set; }
}
string file = @"D:\Temp\data.csv";
using (CsvReader<Data> _reader = new(file))
{
_reader.Culture = CultureInfo.GetCultureInfo("en-us");
_reader.Open();
_reader.SkipHeader();
var _data = _reader.ReadAsEnumerable().Where(p => p.Column1.Contains("16"));
var _materialized = _data.ToList();
}
Method / Property | Description |
---|---|
BufferSize | Stream buffer size, Default: 1024. |
Close() | Close the CSV stream reader |
Culture | Sets the default Culture for decimal / double conversions etc. For more complex conversions use the ICustomCsvParse interface. |
CurrentLine | Returns the current line number. |
DetectEncodingFromByteOrderMarks | Indicates whether to look for byte order marks at the beginning of the file. |
DetectSeparator() | Auto-detects the separator (looks for commonly used separators in the first 10 lines). |
Dispose() | Performs application-defined tasks associated with freeing, releasing, or resetting unmanaged resources. |
EmptyLineBehaviour | Behaviour on empty lines: <li>DefaultInstance: return a new instance of T (default).</li><li>NullValue: return null for the object.</li><li>SkipAndReadNext: when an empty line occurs, the reader moves to the next line.</li><li>LogError: create an entry in the Errors collection.</li><li>ThrowException: throw an exception when an empty line occurs.</li> |
Encoding | The character encoding to use. |
EndOfStream | Returns true when the end of the stream is reached. Use this with Read() / Skip() or a partial ReadAsEnumerable(). |
Errors | Returns a list of errors when HasErrors is true. |
Flush() | Flushes all buffers. |
HasErrors | Indicates there are errors |
Last(int rows) | Seeks the CSV document for the last x entries. This is much faster than IEnumerable.Last(). |
MoveToStart() | Moves the reader to the start position. Skip() and Take() alter the start position; use MoveToStart() to reset it. |
Open() | Opens the Reader. |
Read() | Reads current row into T and advances the reader to the next row. |
ReadAsEnumerable() | Reads and deserializes one CSV line per iteration, which allows querying very large files. It starts from the current position; if you used Skip(), Read() or SkipHeader(), the current position is determined by those methods. |
Separator | Set the separator to use (default ',') |
Skip(int rows) | Skips rows and advances the reader without deserializing them. This is much faster than IEnumerable.Skip(). |
SkipHeader() | Ensures stream is at start then skips the first row. |
(Skip and Last do not deserialize, which is why they are faster than the equivalent IEnumerable operations.)
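Combining the members above, a row-by-row read loop might look like the following sketch. The exact signature of Read() (whether it returns T directly) should be verified against the API; this assumes it does:

```csharp
// Hedged sketch using the members documented in the table above.
using System;
using DevToys.PocoCsv.Core;

string _file = @"D:\Temp\data.csv";
using (CsvReader<Data> _reader = new(_file))
{
    _reader.Open();
    _reader.SkipHeader();               // position past the header row
    while (!_reader.EndOfStream)
    {
        Data _row = _reader.Read();     // deserialize current row, advance reader
        // process _row ...
    }
    if (_reader.HasErrors)
    {
        foreach (var _error in _reader.Errors)
            Console.WriteLine(_error);  // inspect collected read errors
    }
}
```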
CsvWriter<T>
This writer is faster than CsvStreamWriter; it is optimized to serialize objects to rows.
public class Data
{
[Column(Index = 0)]
public string Column1 { get; set; }
[Column(Index = 1)]
public string Column2 { get; set; }
[Column(Index = 2)]
public string Column3 { get; set; }
[Column(Index = 5)]
public string Column5 { get; set; }
}
private IEnumerable<Data> LargeData()
{
for (int ii = 0; ii < 10000000; ii++)
{
Data _line = new()
{
Column1 = "bij",
Column2 = "100",
Column3 = "test",
Column5 = $"{ii}",
};
yield return _line;
}
}
string file = @"D:\largedata.csv";
using (CsvWriter<Data> _writer = new(file) { Separator = ',', Append = true })
{
_writer.Culture = CultureInfo.GetCultureInfo("en-us");
_writer.Open();
_writer.Write(LargeData());
}
Methods / Properties:
Item | Description |
---|---|
Open() | Opens the Writer. |
WriteHeader() | Write header with property names of T. |
Write(IEnumerable<T> rows) | Writes data to Csv while consuming rows. |
Flush() | Flushes all buffers. |
Separator | Set the separator to use (default ',') |
CRLFMode | Determine which mode to use for new lines.<li>CR + LF → Used as a new line character in Windows.</li><li>CR(Carriage Return) → Used as a new line character in Mac OS before X.</li><li>LF(Line Feed) → Used as a new line character in Unix/Mac OS X</li> |
NullValueBehaviour | Determines what to do when writing null objects:<li>Skip: ignore the object.</li><li>EmptyLine: write an empty line.</li> |
Culture | Sets the default Culture for decimal / double conversions etc. For more complex conversions use the ICustomCsvParse interface. |
Encoding | The character encoding to use. |
ColumnAttribute
The column attribute defines the properties to be serialized or deserialized.
Item | Description |
---|---|
Index | Defines the index position within the CSV document. Numbers can be skipped so the reader ignores certain columns; for the writer, skipped numbers produce empty columns. |
Header | Defines the header text. This property only applies to the CsvWriter; if not specified, the property name is used. |
OutputFormat | Applies a string format, depending on the property type. This property is for the CsvWriter only. |
OutputNullValue | Defines the value to write as a default for null. This property is for the CsvWriter only. |
CustomParserType | CustomParserType allows for custom parsing of values to a specific type. |
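As a sketch of how these attribute properties combine (the Order class here is hypothetical, and OutputFormat is assumed to take ordinary .NET format strings, as the description above suggests):

```csharp
using System;
using DevToys.PocoCsv.Core;

public class Order
{
    [Column(Index = 0, Header = "OrderId")]          // header text used by the writer
    public int Id { get; set; }

    [Column(Index = 1, OutputFormat = "yyyy-MM-dd")] // format applied when writing
    public DateTime Created { get; set; }

    [Column(Index = 2, OutputNullValue = "[NULL]")]  // written when Total is null
    public decimal? Total { get; set; }
}
```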
CustomParserType
CustomParserType allows the CsvReader&lt;T&gt; and CsvWriter&lt;T&gt; to use custom parsing for a specific field.
public sealed class ParseBoolean : ICustomCsvParse<bool?>
{
// for CsvReader
public bool? Read(StringBuilder value)
{
switch (value.ToString().ToLower())
{
case "on":
case "true":
case "yes":
case "1":
return true;
case "off":
case "false":
case "no":
case "0":
return false;
}
return null;
}
// for CsvWriter
public string Write(bool? value)
{
if (value.HasValue)
{
if (value == true)
{
return "1";
}
return "0";
}
return string.Empty;
}
}
public class ParsePrice : ICustomCsvParse<Decimal>
{
private CultureInfo _culture;
public ParsePrice()
{
_culture = CultureInfo.GetCultureInfo("en-us");
}
public Decimal Read(StringBuilder value) => Decimal.Parse(value.ToString(), _culture);
public string Write(Decimal value) => value.ToString(_culture);
}
public sealed class CsvPreParseTestObject
{
[Column(Index = 0, CustomParserType = typeof(ParseBoolean) )]
public Boolean? IsOk { get; set; }
[Column(Index = 1)]
public string Name { get; set; }
[Column(Index = 3, CustomParserType = typeof(ParsePrice))]
public Decimal Price { get; set; }
}
using (var _reader = new CsvReader<CsvPreParseTestObject>(_file))
{
_reader.Open();
_reader.Skip(); // Skip header.
var _rows = _reader.ReadAsEnumerable().ToArray(); // Materialize.
}
Custom parsers run as a singleton per specified column in the specific CsvReader&lt;T&gt;.
CsvAttribute
With the CsvAttribute, defaults for CustomParserType can be set at class level; these CustomParserTypes are applied to all properties of that specific type
until they are overruled at property level.
public class Parsestring : ICustomCsvParse<string>
{
public string Read(StringBuilder value)
{
return value.ToString();
}
public string Write(string value)
{
return value;
}
}
[Csv( DefaultCustomParserTypeString = typeof(Parsestring))]
public class CsvAllTypes
{
[Column(Index = 0, OutputFormat = "", OutputNullValue = "")]
public string _stringValue { get; set; }
[Column(Index = 35, OutputFormat = "", OutputNullValue = "")]
public string _stringValue2 { get; set; }
[Column(Index = 1, CustomParserType = typeof(ParseGuid), OutputFormat = "", OutputNullValue = "")]
public Guid _GuidValue { get; set; }
}
Other Examples
public class Data
{
[Column(Index = 0)]
public string Collumn1 { get; set; }
[Column(Index = 1)]
public string Collumn2 { get; set; }
[Column(Index = 2, Header = "Test" )]
public byte[] Collumn3 { get; set; }
[Column(Index = 3)]
public DateTime TestDateTime { get; set; }
[Column(Index = 4)]
public DateTime? TestDateTimeNull { get; set; }
[Column(Index = 5)]
public Int32 TestInt { get; set; }
[Column(Index = 6, OutputNullValue = "[NULL]")]
public Int32? TestIntNull { get; set; }
}
private IEnumerable<Data> GetTestData()
{
yield return new Data
{
Collumn1 = "01",
Collumn2 = "AA",
Collumn3 = new byte[3] { 2, 4, 6 },
TestDateTime = DateTime.Now,
TestDateTimeNull = DateTime.Now,
TestInt = 100,
TestIntNull = 200
};
yield return new Data
{
Collumn1 = "01",
Collumn2 = "AA",
Collumn3 = new byte[3] { 2, 4, 6 },
TestDateTime = DateTime.Now,
TestDateTimeNull = DateTime.Now,
TestInt = 100,
TestIntNull = 200
};
yield return new Data
{
Collumn1 = "04",
Collumn2 = "BB",
Collumn3 = new byte[3] { 8, 9, 10 },
TestDateTime = DateTime.Now,
TestDateTimeNull = null,
TestInt = 300,
TestIntNull = null
};
}
public static string StreamToString(Stream stream)
{
using (StreamReader reader = new StreamReader(stream, Encoding.UTF8))
{
stream.Position = 0;
return reader.ReadToEnd();
}
}
List<Data> _result = new List<Data>();
using (MemoryStream _stream = new MemoryStream())
{
using (CsvWriter<Data> _csvWriter = new CsvWriter<Data>(_stream))
using (CsvReader<Data> _csvReader = new CsvReader<Data>(_stream))
{
_csvWriter.Separator = ';';
_csvWriter.Open();
_csvWriter.WriteHeader();
_csvWriter.Write(GetTestData());
_csvReader.Open();
_csvReader.DetectSeparator(); // Auto detect separator.
_csvReader.Skip(); // Skip header.
_result = _csvReader.ReadAsEnumerable().Where(p => p.Collumn2 == "AA").ToList();
}
}
string _result;
using (MemoryStream _stream = new MemoryStream())
{
using (CsvWriter<Data> _csvWriter = new CsvWriter<Data>(_stream))
{
_csvWriter.Separator = ',';
_csvWriter.Open();
_csvWriter.WriteHeader();
_csvWriter.Write(GetTestData());
_result = StreamToString(_stream);
}
}
Sampling only a few rows without reading the entire CSV.
List<CsvSimple> _result1;
List<CsvSimple> _result2;
string file = @"D:\largedata.csv";
using (CsvReader<CsvSimple> _reader = new CsvReader<CsvSimple>(file))
{
_reader.Open();
_reader.Skip(); // skip the Header row.
// Materializes 20 records but returns 10.
_result1 = _reader.ReadAsEnumerable().Skip(10).Take(10).ToList();
// Materialize only 10 records.
_reader.Skip(10);
_result2 = _reader.ReadAsEnumerable().Take(10).ToList();
// Take last 10 records. Without serializing everything before it.
_result1 = _reader.Last(10).ToList();
}
Mind that Skip and Take advance the reader to the next position:
executing another _reader.ReadAsEnumerable().Where(p => p...).ToList() will query from position 21.
Use MoveToStart() to move the reader back to the starting position.
_reader.Skip() is different from _reader.ReadAsEnumerable().Skip(): the first does not materialize rows to T and is therefore faster.
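A sketch of resetting the position with MoveToStart(), per the note above (file path and Data type as in the earlier examples):

```csharp
using System.Linq;
using DevToys.PocoCsv.Core;

string file = @"D:\largedata.csv";
using (CsvReader<Data> _reader = new(file))
{
    _reader.Open();
    _reader.SkipHeader();
    var _first10 = _reader.ReadAsEnumerable().Take(10).ToList();

    _reader.MoveToStart();   // back to the start of the stream
    _reader.SkipHeader();    // skip the header again
    var _query = _reader.ReadAsEnumerable().Where(p => p.Column1 == "01").ToList();
}
```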
DataTable Import / Export
// Import
var _file = @"C:\data.csv";
var _table = new DataTable();
_table.ImportCsv(_file, ',', true);
// Export
_file = @"C:\data2.csv";
_table.ExportCsv(_file, ',');
Product | Compatible and additional computed target framework versions |
---|---|
.NET | net5.0 is compatible. net5.0-windows was computed. net5.0-windows7.0 is compatible. net6.0 is compatible. net6.0-android was computed. net6.0-ios was computed. net6.0-maccatalyst was computed. net6.0-macos was computed. net6.0-tvos was computed. net6.0-windows was computed. net6.0-windows7.0 is compatible. net7.0 is compatible. net7.0-android was computed. net7.0-ios was computed. net7.0-maccatalyst was computed. net7.0-maccatalyst16.1 is compatible. net7.0-macos was computed. net7.0-macos13.0 is compatible. net7.0-tvos was computed. net7.0-windows was computed. net7.0-windows7.0 is compatible. net8.0 is compatible. net8.0-android was computed. net8.0-browser was computed. net8.0-ios was computed. net8.0-maccatalyst was computed. net8.0-maccatalyst17.0 is compatible. net8.0-macos was computed. net8.0-macos14.0 is compatible. net8.0-tvos was computed. net8.0-windows was computed. net8.0-windows7.0 is compatible. |
.NET Core | netcoreapp3.0 is compatible. netcoreapp3.1 is compatible. |
.NETCoreApp 3.0
- No dependencies.
.NETCoreApp 3.1
- No dependencies.
net5.0
- No dependencies.
net5.0-windows7.0
- No dependencies.
net6.0
- No dependencies.
net6.0-windows7.0
- No dependencies.
net7.0
- No dependencies.
net7.0-maccatalyst16.1
- No dependencies.
net7.0-macos13.0
- No dependencies.
net7.0-windows7.0
- No dependencies.
net8.0
- No dependencies.
net8.0-maccatalyst17.0
- No dependencies.
net8.0-macos14.0
- No dependencies.
net8.0-windows7.0
- No dependencies.
NuGet packages
This package is not used by any NuGet packages.
GitHub repositories
This package is not used by any popular GitHub repositories.
Version | Downloads | Last updated |
---|---|---|
4.3.1 | 41 | 11/22/2024 |
4.3.0 | 40 | 11/21/2024 |
4.2.5 | 42 | 11/20/2024 |
4.2.4 | 42 | 11/19/2024 |
4.2.3 | 71 | 11/13/2024 |
4.2.2 | 160 | 2/28/2024 |
4.2.1 | 116 | 2/24/2024 |
4.2.0 | 129 | 2/23/2024 |
4.1.2 | 104 | 2/22/2024 |
4.1.1 | 132 | 2/21/2024 |
4.1.0 | 127 | 2/21/2024 |
4.0.1 | 141 | 2/12/2024 |
4.0.0 | 128 | 2/12/2024 |
3.1.13 | 103 | 2/8/2024 |
3.1.12 | 150 | 2/7/2024 |
3.1.11 | 105 | 1/31/2024 |
3.1.10 | 116 | 1/19/2024 |
3.1.9 | 121 | 1/13/2024 |
3.1.8 | 120 | 1/12/2024 |
3.1.7 | 108 | 1/11/2024 |
3.1.5 | 134 | 1/8/2024 |
3.1.3 | 175 | 12/1/2023 |
3.1.2 | 135 | 12/1/2023 |
3.1.0 | 120 | 11/28/2023 |
3.0.7 | 209 | 8/27/2023 |
3.0.6 | 150 | 8/23/2023 |
3.0.5 | 159 | 8/23/2023 |
3.0.4 | 160 | 8/17/2023 |
3.0.3 | 174 | 8/15/2023 |
3.0.2 | 176 | 8/11/2023 |
3.0.1 | 195 | 8/11/2023 |
3.0.0 | 171 | 8/11/2023 |
2.0.7 | 220 | 8/9/2023 |
2.0.5 | 180 | 8/4/2023 |
2.0.4 | 178 | 8/3/2023 |
2.0.3 | 149 | 7/31/2023 |
2.0.2 | 176 | 7/28/2023 |
2.0.0 | 178 | 7/19/2023 |
1.7.53 | 216 | 4/14/2023 |
1.7.52 | 215 | 4/12/2023 |
1.7.51 | 202 | 4/7/2023 |
1.7.43 | 231 | 4/3/2023 |
1.7.42 | 214 | 4/3/2023 |
1.7.41 | 198 | 4/3/2023 |
1.7.5 | 203 | 4/7/2023 |
1.7.3 | 243 | 4/3/2023 |
1.7.2 | 231 | 4/3/2023 |
1.7.1 | 220 | 4/3/2023 |
1.7.0 | 229 | 4/1/2023 |
1.6.3 | 226 | 3/31/2023 |
1.6.2 | 228 | 3/29/2023 |
1.6.1 | 221 | 3/29/2023 |
1.6.0 | 215 | 3/27/2023 |
1.5.8 | 240 | 3/24/2023 |
1.5.7 | 211 | 3/22/2023 |
1.5.6 | 227 | 3/22/2023 |
1.5.5 | 235 | 3/21/2023 |
1.5.4 | 244 | 3/21/2023 |
1.5.1 | 234 | 3/20/2023 |
1.5.0 | 239 | 3/19/2023 |
1.4.5 | 234 | 3/18/2023 |
1.4.4 | 274 | 3/18/2023 |
1.4.3 | 227 | 3/18/2023 |
1.4.2 | 245 | 3/18/2023 |
1.4.1 | 211 | 3/18/2023 |
1.4.0 | 230 | 3/18/2023 |
1.3.92 | 240 | 3/18/2023 |
1.3.91 | 246 | 3/17/2023 |
1.3.9 | 233 | 3/17/2023 |
1.3.8 | 209 | 3/17/2023 |
1.3.7 | 239 | 3/17/2023 |
1.3.6 | 205 | 3/17/2023 |
1.3.5 | 221 | 3/17/2023 |
1.3.4 | 243 | 3/17/2023 |
1.3.3 | 233 | 3/16/2023 |
1.3.2 | 214 | 3/16/2023 |
1.3.1 | 241 | 3/16/2023 |
1.3.0 | 196 | 3/16/2023 |
1.2.0 | 235 | 3/14/2023 |
1.1.6 | 275 | 2/24/2023 |
1.1.5 | 320 | 2/16/2023 |
1.1.4 | 479 | 5/18/2022 |
1.1.3 | 716 | 1/27/2022 |
1.1.2 | 645 | 1/27/2022 |
1.1.1 | 698 | 1/14/2022 |
1.1.0 | 5,843 | 11/23/2021 |
1.0.5 | 394 | 5/11/2021 |
1.0.4 | 338 | 4/14/2021 |
1.0.3 | 378 | 4/12/2021 |
1.0.2 | 336 | 4/12/2021 |
1.0.1 | 317 | 4/7/2021 |
1.0.0 | 389 | 4/7/2021 |
3.1.3
- Added .net 8 support.
3.0.6
- Small refactorings
3.0.4
- CsvWriter CustomParser for strings.
- Exceptions to Errors log when using ICustomCsvParser
3.0.3
- Added SkipAndReadNext option to EmptyLineBehaviour for CsvReader. This will ignore empty lines altogether.
- Added LogError and ThrowException to EmptyLineBehaviour.
- Bugfix: not serializing/deserializing enums
- Throw an error on unsupported property types.
3.0.0
- CsvStreamReader and CsvReader<T> Performance +10%
- BugFix CsvWriter: Flush on Close().
2.0.6
- Small improvements.
2.0.5
- Small improvements.
2.0.4
- Critical Bugfix: Escaped separator not correctly handled.
- Small performance improvements.
2.0.3
- Bugfix: Deserialize with a lower column count than in the CSV.
2.0.2
- Bugfix: not properly reading escaped double quotes.
- Minor improvements
2.0
- Improved CsvWriter<T> speed.
- Extended ICustomCsvParser<T> to be supported by the CsvWriter<T> as well.
- ICustomCsvParser<T>.Parse() has been removed.
- Added Read() and Write() to ICustomCsvParser<T>
- Refactored CsvReader<T> and CsvWriter<T>
- Introduced CsvAttribute; with this attribute, defaults for ICustomCsvParser can be set at class level.
1.7.53
- Improved CsvStreamReader speed.
- Added ReadAsEnumerable() to CsvStreamReader.
1.7.51
- Added DataTable extensions ImportCsv / ExportCsv
1.7.1
- Changed ICustomCsvParse to the generic ICustomCsvParse&lt;T&gt;
1.7
- Added CustomParserType to ColumnAttribute
1.6.3
- Added NullValueBehaviour to CsvWriter<T>
- Added CurrentLine to Reader
- Added LineNumber to Error log
- Added Flush() to Reader<T> and Writer<T>
- Refactored UnitTests in GitHub code Demo Tests and Validate Tests.
1.6.2
- Minor bugfix with CR only ending.
1.6.1
- Fixed bug with AutoDetectSeparator.
- Added EmptyLineBehaviour to CsvReader<T>
- Refactoring
1.6.0
- Added Last(int rows) function to Reader<T>.
- Added IEnumerable<CsvReadError> Errors to CsvReader<T>.
- Fixed Skip() counter.
- Correct handling for CRLF in CsvStreamReader and CsvReader<T>
- \r = CR(Carriage Return) → Used as a new line character in Mac OS before X
- \n = LF(Line Feed) → Used as a new line character in Unix/Mac OS X
- \r\n = CR + LF → Used as a new line character in Windows
- Added CRLFMode to CsvStreamWriter and CsvWriter<T>
1.5.8
- Minor Improvements
- Added Skip() to CsvStreamReader
- Changed EndOfStream behaviour
1.5.7
- Small improvements
1.5.1
- Updated Readme
- Fixed bug with Skip(rows)
- Fixed small bug where ReadAsEnumerable() always started at position 0.
1.5
- Correct handling Null Types for Reader
1.4.5
- Refactoring
- Removed DynamicReader and DynamicWriter
1.4.2
- Another performance improvement for Reader
1.4
- Performance improvements for Writer.
- Added OutputFormat to ColumnAttribute
1.3.8
- Performance improvement for Reader
1.3.2
- Bug fixes
1.3
- Improved constructors to support all parameters for underlying StreamReader and StreamWriters.
- Added Skip() to CsvReader (to be used in combination Read())
- Added WriteHeader() to CsvWriter()
- Added Header to Column attribute to be used by the CsvWriter
- GetCsvSeparator() / DetectSeparator() detects more exotic separators.
- Added byte[] to base64 serialization to CsvReader and CsvWriter
1.2
- Added single Read() function.
- Rows() now marked as obsolete.
- Added ReadAsEnumerable() as replacement for Rows()
- Added GetCsvSeparator(int sampleRows) to CsvStreamReader()
- Added DetectSeparator() to CsvReader()
1.1.5
- Bug Fixes
1.1.4
- Added CsvUtils static class including some special Csv functions to use.
1.1.3
- Added CsvWriterDynamic
1.1.1
- Added CsvReaderDynamic
1.1.0
- Speed optimizations (using delegates instead of reflection)
1.0.5
- Read/Write Stream csv lines into a poco object.
- Query / Read / Write large csv files.