Error: "An attempt was made to insert a non-unique value into a unique index" (Microsoft SQL Server) when switching from Accounting PROF to CORP, and in other cases. A duplicate value cannot be inserted into a unique index.

You receive a message containing one of the following lines:
Microsoft OLE DB Provider for SQL Server: CREATE UNIQUE INDEX terminated because a duplicate key was found for index ID
or
Cannot insert duplicate key row in object
or
An attempt was made to insert a non-unique value into a unique index.

Solutions:

1. In SQL Server Management Studio, physically drop the faulty index (in my case it was an index on the accounting register totals table; see the sketch below). In 1C, unpost the problem documents. Then run Testing and Correction with the table reindexing and totals recalculation boxes checked: 1C recreates the index without an error. Finally, re-post the documents that previously failed.
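A minimal T-SQL sketch of the drop step. The names _AccumRgT19455 and _AccumRgT19455_ByDims are hypothetical placeholders; substitute the table and index names from your error message:
SQL code
-- Hypothetical names: replace with the table and index from the error text.
USE [your_1c_database];
-- List the indexes on the totals table to confirm the exact index name.
EXEC sp_helpindex '_AccumRgT19455';
-- Drop the faulty unique index; Testing and Correction will recreate it.
DROP INDEX _AccumRgT19455_ByDims ON _AccumRgT19455;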

2. 1) Using Management Studio 2005, I generated the CREATE script for the problem index and saved it to a file.
2) Manually dropped the faulty index from the table _AccumRgTn19455.
3) Ran a query like:
SQL code
select count(*), index_fields
from _AccumRgTn19455
group by index_fields
having count(*) > 1
After the index was dropped, the query returned 15 duplicate records, although before step 2 it had returned nothing.
4) I went through all of these records and cleaned up the duplicates manually. I also used the “Report Structure” processing to understand what I was dealing with: it turned out that the table _AccumRgTn19455 stores the accumulation register “Product output (tax accounting)”. With a few more SQL queries I identified 15 non-unique documents, and after all the steps were completed I checked in 1C that these documents were posted normally, without errors. Of course, you shouldn’t clean tables at random: it is important to understand what is being cleaned and what the consequences may be.
5) Ran the index-creation script saved in step 1.
6) Switched the database to single-user mode and ran DBCC CHECKDB; this time no errors were reported (a sketch of steps 6-7 is shown below).
7) Switched the database back to multi-user mode.
That's it, the problem was overcome. Back in 1C I ran “Testing and Correction”; it also completed cleanly and no longer complained about the non-unique index.
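A hedged sketch of steps 6 and 7, assuming the database is named db_1c (a placeholder):
SQL code
-- Placeholder database name; substitute your own.
alter database [db_1c] set single_user with rollback immediate;
-- Check the database for consistency errors.
dbcc checkdb ('db_1c');
-- Return the database to normal multi-user mode.
alter database [db_1c] set multi_user;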

3. If the non-uniqueness involves dates with zero values, the problem is solved by creating the database with a date offset of 2000.

1. If the problem occurs when loading the database, then:
1.1. If you are loading (from a dt file) into a MS SQL Server database, then when creating the database, before loading, specify a date offset of 2000.
If the database has already been created with offset 0, create a new one with offset 2000.

1.2. If it is possible to work with the database in the file variant, perform Testing and Correction, as well as Configuration - Verify Configuration - Check logical integrity of the configuration + Search for invalid references.

1.3. If a file variant is not possible, try loading the DT into a client-server variant with DB2 (which is less demanding about uniqueness), and then perform Testing and Correction, as well as Configuration - Verify Configuration - Check logical integrity of the configuration + Search for invalid references.

1.4. To localize the problem, you can identify the data of the object whose loading failed. To do this, enable a trace in the SQL Server Profiler utility during the load, or enable recording of the DBMSSQL and EXCP events in the 1C technological log (a sample logcfg.xml is shown below).
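A minimal technological-log configuration that records these two events, as a sketch: the log path C:\1c_techlog and the 24-hour history are assumptions, and the file logcfg.xml goes into the conf folder of the 1C:Enterprise bin directory.
XML code
<?xml version="1.0" encoding="UTF-8"?>
<!-- logcfg.xml: records DBMSSQL and EXCP events; C:\1c_techlog is an assumed path. -->
<config xmlns="http://v8.1c.ru/v8/tech-log">
    <log location="C:\1c_techlog" history="24">
        <event>
            <eq property="name" value="DBMSSQL"/>
        </event>
        <event>
            <eq property="name" value="EXCP"/>
        </event>
        <property name="all"/>
    </log>
</config>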

2. If the non-uniqueness problem occurs while users are working:

2.1. Find the problematic query using the method from paragraph 1.4.

2.1.2. Sometimes the error occurs while a query is being executed, for example:

This error occurs because, in the module of the accumulation register “Working time of employees of organizations”, the query in the “Register Recalculations” procedure is missing the keyword DISTINCT.
Code 1C v 8.x That is, it should be:
Query = New Query(
"SELECT DISTINCT
| Basic.Individual,
. . . . .
In the latest releases of ZUP and UPP the error does not occur, because the query already contains DISTINCT.

2.2. After finding the problematic index from the previous paragraph, you need to find the non-unique records.
2.2.1. Template (“skeleton”) script for identifying non-unique records in SQL:
SQL code
SELECT COUNT(*) AS Counter, <list of all fields of the corresponding index>
FROM <table name>
GROUP BY <list of all fields of the corresponding index>
HAVING COUNT(*) > 1

2.2.2 Example. The index in the error is called "_Document140_VT1385_IntKeyIndNG".
List of table fields:
_Document140_IDRRef, _KeyField, _LineNo1386, _Fld1387, _Fld1388, _Fld1389, _Fld1390, _Fld1391RRef, _Fld1392RRef, _Fld1393_TYPE, _Fld1393_RTRef, _Fld1393_RRRef, _Fld1394, _Fld1395, _Fld1396RRef, _Fld1397, _Fld1398, _Fld1399RRef, _Fld22260_TYPE, _Fld22260_RTRef, _Fld22260_RRRef, _Fld22261_TYPE, _Fld22261_RTRef, _Fld22261_RRRef
Before performing the procedure below, back up the database.
Run in MS SQL Server Query Analyzer:
SQL code
select count(*), _Document140_IDRRef, _KeyField
from _Document140_VT1385
group by _Document140_IDRRef, _KeyField
having count(*) > 1
This query returns the values of the _Document140_IDRRef and _KeyField columns (id, key) of the duplicate records.

Using the query:
SQL code
select *
from _Document140_VT1385
where _Document140_IDRRef = id1 and _KeyField = key1 or _Document140_IDRRef = id2 and _KeyField = key2 or ...
look at the values of the other columns of the duplicate records.
If both records contain meaningful but different values, change the _KeyField value of one of them so that it is unique. To do this, determine the maximum occupied value of _KeyField (keymax):
SQL code
select max(_KeyField)
from _Document140_VT1385
where _Document140_IDRRef = id1
Replace the _KeyField value in one of the duplicate records with a correct one:
SQL code
update _Document140_VT1385
set _KeyField = keymax + 1
where _Document140_IDRRef = id1 and _KeyField = key1 and _LineNo1386 = lineno1
Here _LineNo1386 = lineno1 is an additional condition that lets you pick one of the two repeated records.

If one (or both) of the duplicate records has obviously incorrect values, delete it:
SQL code
delete from _Document140_VT1385
where _Document140_IDRRef = id1 and _LineNo1386 = lineno1
If the duplicate records have identical values in all columns, keep only one of them:
SQL code
select distinct *
into #tmp1
from _Document140_VT1385
where _Document140_IDRRef = id1 and _KeyField = key1

delete from _Document140_VT1385
where _Document140_IDRRef = id1 and _KeyField = key1

insert into _Document140_VT1385
select * from #tmp1

drop table #tmp1

The described procedure must be performed for each pair of duplicate records (for the fully identical case, a consolidated sketch is shown below).
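When there are many fully identical duplicates, the following sketch (using the same table and key columns as in the example above) keeps one row per key and deletes the extra copies in a single pass; it is an alternative to repeating the temp-table procedure pair by pair:
SQL code
-- Sketch: keeps one row per (_Document140_IDRRef, _KeyField) and deletes the rest.
with dups as (
    select row_number() over (
               partition by _Document140_IDRRef, _KeyField
               order by _LineNo1386
           ) as rn
    from _Document140_VT1385
)
delete from dups
where rn > 1;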

2.2.3. Second example:
SQL code
SELECT COUNT(*) AS Expr2, _IDRRef AS Expr1, _Description
FROM _Reference8_
GROUP BY _IDRRef, _Description
HAVING COUNT(*) > 1

2.2.4. An example of finding non-unique records using a 1C:Enterprise query:
Code 1C v 8.x
SELECT Directory.Ref
FROM Catalog.Directory AS Directory
GROUP BY Directory.Ref
HAVING COUNT(*) > 1

The error also occurs if some objects, attributes, or subcontos in the database contain a NULL value where NULL is not allowed. This variant of the error appears only in SQL databases: if you load such a database into a file infobase, the error disappears, because the file database has its own tables (4 in total) and SQL has its own, and the SQL database reacts critically to such values in its tables.

This problem cannot be solved by any testing (external or internal) in either database variant (SQL or file), nor even by the _1sp_DBReindex procedure in SQL Server Management Studio, which is supposed to restructure the tables in SQL.

Let's look at the solution using the example of switching from Accounting 3.0 PROF to CORP. After the transition, account 68.01 has a new subconto, “Registration with the Tax Authority”. In SQL databases, documents created in the PROF version that use this account then fail to post: the error shown above appears, because for old documents this new subconto is written in the postings with the value NULL (whereas it should be an empty value, or an actual tax authority).
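To estimate how many records are affected, a read-only diagnostic query of the following kind can be used. This is a sketch only: the table _AccRgED1234 and the column _Fld5678RRef are hypothetical placeholders for the accounting register subconto table and for the column holding the “Registration with the Tax Authority” value; look up the real names for your configuration (for example, via the database storage structure report). The fix itself must still be done through the processing in a file database, as described below.
SQL code
-- Hypothetical names: _AccRgED1234 and _Fld5678RRef are placeholders only.
select count(*) as AffectedRows
from _AccRgED1234
where _Fld5678RRef is null;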

To fix this error, you need to remove the NULL values where they should not be: in this case, in documents where the “Registration with the Tax Authority” subconto is used. This can be done with a processing that replaces NULL with an empty value (a ready-made processing can be downloaded from this article). It has to be done with a processing, because an attempt to change the value of this subconto manually in the document postings results in the same error.

The processing for replacing NULL in all “Registration with the Tax Authority” subconto values can be downloaded from this article below.

BUT replacing NULL directly in the SQL database will not work: the processing will raise the same error there. Therefore you need to do the following:

1. Upload the working SQL database that has already been converted to CORP to a dt file (in the Configurator: Administration - Upload database - choose where to save the *.dt file).

2. Load the dt file into a file database (use a spare or specially prepared clean file database; in the Configurator: Administration - Load database - select the previously uploaded dt file).

3. Run the processing in the file database (there will be no errors there and all NULLs will be replaced correctly; how to run the processing is described below).

4. Now do the reverse: upload a dt file from the file database and load it into the SQL database. After that, posting the processed documents no longer causes the error.

The processing from this article finds all documents in the specified period whose postings include the “Registration with the Tax Authority” subconto (which appears in the CORP version) with the value NULL, and replaces that value with an empty value.

In the processing, specify the period for which the documents should be processed (it can be the entire period for which records are kept in the database) and click “Fill tabular part”. Then check the boxes for the documents you want to process (you can select all) and click the “Process” button.

Accordingly, if you get the same error NOT after switching to CORP but, for example, after a data exchange, after loading some data, or after running some processing, you need to identify where the NULL value was assigned in the specific document or catalog and remove it in the same way, with your own processing, in the order described above. Remember that NULL can appear in document postings (and not only accounting ones), and also in some attribute on a document or catalog form; in the latter case, though, the object probably will not even open.

Also, if this error appeared when posting a document after you transferred an Accounting CORP file database to SQL (and the database was originally PROF), it means that the documents created in the PROF version also have a NULL value in the “Registration with the Tax Authority” subconto, and the SQL database does not accept it; the error above will appear when loading the database into SQL. In this case there are actually no NULL values in the file database, but SQL loads exactly such values into its tables. So you would need to force the SQL database to create these NULLs and then correct them in the file database, but I cannot tell you how to do that.
