Web Factory: Error Creating and Downloading PDFs from Data in a Table

I have an app that stores the content of a PDF file in a byte[] field of a table.

When I run the app locally (Windows 10 dev), it fetches the data and creates and downloads the stored file properly.

When I moved the data to production (Microsoft Azure), the app corrupted the file, either clipping it or appending junk data at the end. I can see the difference between the files created in the dev environment vs. production via a hex dump of both files.

After three days of screwing around trying to understand why the Azure environment would clip the file, I started to look at the code generated by Code On Time.

Specifically blob.ashx.cs, around line 950. It looks like the buffer reading is not correct, or is behaving badly in the Azure architecture. I really don't know, and right now I don't care.

I modified the code to read the full data content in one read, and I believe that the variable streamLength is being used incorrectly to tag the size of the file.

Why you would have two variables with such confusingly similar names beats me!

streamLength.ToString() != stream.Length.ToString() - at least in Azure

Here is the modified code, with the original code commented out.

// context.Response.AddHeader("Content-Length", streamLength.ToString());
context.Response.AddHeader("Content-Length", stream.Length.ToString());
if (stream.Length == 0)
{
    context.Response.StatusCode = 404;
    return;
}
// stream.Position = offset;
// buffer = new byte[(1024 * 32)];
// int bytesRead = stream.Read(buffer, 0, buffer.Length);
// while (bytesRead > 0)
// {
//     context.Response.OutputStream.Write(buffer, 0, bytesRead);
//     offset = (offset + bytesRead);
//     bytesRead = stream.Read(buffer, 0, buffer.Length);
// }
stream.Position = offset;
buffer = new byte[stream.Length];
int bytesRead = stream.Read(buffer, 0, buffer.Length);
if (bytesRead > 0)
{
    context.Response.OutputStream.Write(buffer, 0, bytesRead);
    // offset = (offset + bytesRead);
    // bytesRead = stream.Read(buffer, offset, buffer.Length);
}

Yes, I know that if the data to be read is huge, the new byte[stream.Length] call is a terrible programming idea, but my data files are small and I need this thing done.
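For what it's worth, a minimal plain .NET sketch (hypothetical ChunkyStream and ReadAll names, not Code On Time code) of why the buffer handling here is fragile: the Stream.Read contract allows it to return fewer bytes than requested, so even a single read into new byte[stream.Length] can come up short on a network-backed stream, while a loop that writes only the bytes actually read on each pass recovers the full content.

```csharp
using System;
using System.IO;

public class CopyDemo
{
    // Hypothetical stream that returns at most 10 bytes per Read call,
    // mimicking a network-backed blob stream that delivers short reads.
    public class ChunkyStream : MemoryStream
    {
        public ChunkyStream(byte[] data) : base(data) { }
        public override int Read(byte[] buffer, int offset, int count)
        {
            return base.Read(buffer, offset, Math.Min(count, 10));
        }
    }

    // Robust copy: keep reading until Read returns 0, appending only
    // the bytes actually read on each pass.
    public static byte[] ReadAll(Stream source)
    {
        using (var destination = new MemoryStream())
        {
            var buffer = new byte[32 * 1024];
            int bytesRead;
            while ((bytesRead = source.Read(buffer, 0, buffer.Length)) > 0)
                destination.Write(buffer, 0, bytesRead);
            return destination.ToArray();
        }
    }

    public static void Main()
    {
        var data = new byte[100];
        new Random(42).NextBytes(data);

        // A single Read into a full-size buffer may come up short...
        var partial = new byte[data.Length];
        int got = new ChunkyStream(data).Read(partial, 0, partial.Length);
        Console.WriteLine(got);                                    // 10, not 100

        // ...while the loop recovers everything.
        Console.WriteLine(ReadAll(new ChunkyStream(data)).Length); // 100
    }
}
```

This suggests the single-read fix works for small blobs but isn't guaranteed by the Stream contract; the Content-Length mismatch between streamLength and stream.Length is the more likely cause of the clipping.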

If a Product Manager for the product wants to speak further about this issue, give a shout.

This needs to be fixed.
  • Carlos,

    Can you make the following change to the blob handler (note the first line in the catch block) and report whether the fix solves the issue?


    try
    {
        // Correction for Northwind database image format
        offset = 78;
        streamLength = (streamLength - 78);
        stream.Position = offset;
        buffer = new byte[(stream.Length - offset)];
        stream.Read(buffer, 0, buffer.Length);
        img = Image.FromStream(new MemoryStream(buffer, 0, buffer.Length));
    }
    catch (Exception ex)
    {
        streamLength = stream.Length;
        offset = 0;
        context.Trace.Write(ex.ToString());
    }

  • Dennis,

    The above change seems to have fixed the problem.

    I will run the code for a bit and let you know if I find the issue again.

    I assume that I will be copying the code every time I generate new code until you incorporate this fix into a patch.

    Hope you have a trick to keep the blob handler from being overwritten.

    Thanks!

    Carlos